
LICENSING NOTICE¶

All users of VitalDB, an open biosignal dataset, must agree to the Data Use Agreement below. If you do not agree, please close this window. The Data Use Agreement is available here: https://vitaldb.net/dataset/#h.vcpgs1yemdb5

This is the development version of the project code¶

For the Project Draft submission see the DL4H_Team_24_Project_Draft.ipynb notebook in the project repository.

Project repository¶

The project repository can be found at: https://github.com/abarrie2/cs598-dlh-project

Introduction¶

This project aims to reproduce findings from the paper titled "Predicting intraoperative hypotension using deep learning with waveforms of arterial blood pressure, electroencephalogram, and electrocardiogram: Retrospective study" by Jo Y-Y et al. (2022) [1]. This study introduces a deep learning model that predicts intraoperative hypotension (IOH) events before they occur, utilizing a combination of arterial blood pressure (ABP), electroencephalogram (EEG), and electrocardiogram (ECG) signals.

Background of the Problem¶

Intraoperative hypotension (IOH) is a common and significant surgical complication defined by a mean arterial pressure drop below 65 mmHg. It is associated with increased risks of myocardial infarction, acute kidney injury, and heightened postoperative mortality. Effective prediction and timely intervention can substantially enhance patient outcomes.
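The IOH definition above can be expressed as a simple event detector. The sketch below is illustrative only: the function name, the assumption of a 1 Hz MAP series, and the 60-second sustain duration are ours, not the paper's exact event criteria.

```python
import numpy as np

def find_ioh_events(map_signal, threshold=65.0, min_duration_s=60):
    """Return (start, end) index pairs where MAP stays below `threshold`
    for at least `min_duration_s` samples (assumes a 1 Hz MAP series)."""
    below = np.asarray(map_signal) < threshold
    events = []
    start = None
    for i, flag in enumerate(below):
        if flag and start is None:
            start = i                      # hypotensive run begins
        elif not flag and start is not None:
            if i - start >= min_duration_s:
                events.append((start, i))  # run was long enough to count
            start = None
    # Handle a run that extends to the end of the recording
    if start is not None and len(below) - start >= min_duration_s:
        events.append((start, len(below)))
    return events
```

Brief dips below 65 mmHg that do not persist for the minimum duration are ignored, which is why a sustain criterion matters for avoiding spurious event labels.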

Evolution of IOH Prediction¶

Initial attempts to predict IOH primarily used arterial blood pressure (ABP) waveforms. A foundational study by Hatib F et al. (2018) titled "Machine-learning Algorithm to Predict Hypotension Based on High-fidelity Arterial Pressure Waveform Analysis" [2] showed that machine learning could forecast IOH events using ABP with reasonable accuracy. This finding spurred further research into utilizing various physiological signals for IOH prediction.

Subsequent advancements included the development of the Acumen™ hypotension prediction index, which was studied in "AcumenTM hypotension prediction index guidance for prevention and treatment of hypotension in noncardiac surgery: a prospective, single-arm, multicenter trial" by Bao X et al. (2024) [3]. This trial integrated a hypotension prediction index into blood pressure monitoring equipment, demonstrating its effectiveness in reducing the number and duration of IOH events during surgeries. Further study is needed to determine whether this reduction in IOH events translates into improved postoperative patient outcomes.

Current Study¶

Building on these advancements, the paper by Jo Y-Y et al. (2022) proposes a deep learning approach that enhances prediction accuracy by incorporating EEG and ECG signals along with ABP. This multi-modal method, evaluated over prediction windows of 3, 5, 10, and 15 minutes, aims to provide a comprehensive physiological profile that could predict IOH more accurately and earlier. Their results indicate that the combination of ABP and EEG significantly improves performance metrics such as AUROC and AUPRC, outperforming models that use fewer signals or different combinations.

Our project seeks to reproduce and verify Jo Y-Y et al.'s results to assess whether this integrated approach can indeed improve IOH prediction accuracy, thereby potentially enhancing surgical safety and patient outcomes.

Scope of Reproducibility:¶

The original paper investigated the following hypotheses:

  1. Hypothesis 1: A model using ABP and ECG will outperform a model using ABP alone in predicting IOH.
  2. Hypothesis 2: A model using ABP and EEG will outperform a model using ABP alone in predicting IOH.
  3. Hypothesis 3: A model using ABP, EEG, and ECG will outperform a model using ABP alone in predicting IOH.

Results were compared using AUROC and AUPRC scores. Based on the results described in the original paper, we expect that Hypothesis 2 will be confirmed, and that Hypotheses 1 and 3 will not be confirmed.

In order to perform the corresponding experiments, we will implement a CNN-based model that can be configured to train and infer using the following four model variations:

  1. ABP data alone
  2. ABP and ECG data
  3. ABP and EEG data
  4. ABP, ECG, and EEG data

We will measure the performance of these configurations using the same AUROC and AUPRC metrics as the original paper. To test Hypotheses 1, 2, and 3, we will compare the AUROC and AUPRC of model variation 1 against model variations 2, 3, and 4, respectively. For each of these comparisons, we will run multiple experiments where the time-to-IOH event prediction uses the following prediction windows:

  1. 3 minutes before event
  2. 5 minutes before event
  3. 10 minutes before event
  4. 15 minutes before event

In the event that we are compute-bound, we will prioritize the 3-minute prediction window experiments as they are the most relevant to the original paper's findings.
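The metric comparison described above can be sketched with scikit-learn. The arrays below are synthetic placeholders, not project data; `scores_v1` stands in for an ABP-only model and `scores_v3` for a (deliberately more informative) ABP + EEG model.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)          # synthetic binary IOH labels
scores_v1 = rng.random(200)                    # uninformative scores (ABP-only stand-in)
# Informative scores: positives land in [0.5, 1.0), negatives in [0, 0.5)
scores_v3 = np.clip(y_true * 0.5 + rng.random(200) * 0.5, 0, 1)

for name, scores in [("ABP alone", scores_v1), ("ABP + EEG", scores_v3)]:
    auroc = roc_auc_score(y_true, scores)
    auprc = average_precision_score(y_true, scores)
    print(f"{name}: AUROC={auroc:.3f} AUPRC={auprc:.3f}")
```

A hypothesis is supported when the multi-signal variation's AUROC and AUPRC both exceed those of the ABP-only baseline on the same test split.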

From the original paper: figure showing the predictive power of the ABP, ECG and ABP + ECG models at 3-, 5-, 10- and 15-minute prediction windows.

Modifications made for demo mode¶

In order to demonstrate the functioning of the code within a short runtime (i.e., under the 8-minute limit), the following options and modifications were used:

  1. MAX_CASES was set to 20. The total number of cases used in the full training set is 3296, but the smaller number allows each section of the pipeline to be demonstrated.
  2. vitaldb_cache is prepopulated in Google Colab. The cache file is approx. 800MB, contains the raw and minified copies of the source dataset, and is downloaded from Google Drive. This is much faster than using the vitaldb API, but again covers only a fraction of the data. The full dataset can be downloaded with the API or prepopulated by following the instructions in the "Bulk Data Download" section below.
  3. max_epochs is set to 6. With the small dataset, training is fast and shows the decreasing training and validation losses. In the full model run, max_epochs will be set to 100. In both cases, early stopping is enabled and will stop training if the validation loss stops decreasing for five consecutive epochs.
  4. Only the "ABP + EEG" combination will be run. In the final report, additional combinations will be run, as discussed later.
  5. Only the 3-minute prediction window will be run. In the final report, additional prediction windows (5, 10 and 15 minutes) will be run, as discussed later.
  6. No ablations are run in the demo. These will be completed for the final report.
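The early-stopping rule mentioned in item 3 can be sketched as follows. The function and variable names are illustrative, not taken from the project code.

```python
def should_stop(val_losses, patience=5):
    """Return True when the last `patience` epochs all failed to improve
    on the best validation loss seen before them."""
    if len(val_losses) <= patience:
        return False  # not enough history to judge stagnation
    best_before = min(val_losses[:-patience])
    # Stop only if no epoch in the patience window beat the earlier best
    return all(loss >= best_before for loss in val_losses[-patience:])
```

In a training loop this would be checked once per epoch, typically alongside saving a checkpoint whenever the validation loss reaches a new minimum.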

Methodology¶

Methodology from Final Rubric¶

  • Environment
    • Python version - DONE
    • Dependencies/packages needed - DONE
  • Data
    • Data download instruction - DONE
    • Data descriptions with helpful charts and visualizations - DONE
    • Preprocessing code + command - DONE
  • Model
    • Citation to the original paper - DONE
    • Link to the original paper’s repo (if applicable) - DONE
    • Model descriptions - DONE
    • Implementation code - DONE
    • Pretrained model (if applicable)
  • Training
    • Hyperparams - DONE
      • Report at least 3 types of hyperparameters such as learning rate, batch size, hidden size, dropout
    • Computational requirements
      • Report at least 3 types of requirements such as type of hardware, average runtime for each epoch, total number of trials, GPU hrs used, # training epochs
      • Training code
  • Evaluation
    • Metrics descriptions
    • Evaluation code

The methodology section is composed of the following subsections: Environment, Data and Model.

  • Environment: This section describes the setup of the environment, including the installation of necessary libraries and the configuration of the runtime environment.
  • Data: This section describes the dataset used in the study, including its collection and preprocessing.
    • Data Collection: This section describes the process of downloading the dataset from VitalDB and populating the local data cache.
    • Data Preprocessing: This section describes the preprocessing steps applied to the dataset, including data selection, data cleaning, and feature extraction.
  • Model: This section describes the deep learning model used in the study, including its implementation, training, and evaluation.
    • Model Implementation: This section describes the implementation of the deep learning model, including the architecture, loss function, and optimization algorithm.
    • Model Training: This section describes the training process, including the training loop, hyperparameters, and training strategy.
    • Model Evaluation: This section describes the evaluation process, including the metrics used, the evaluation strategy, and the results obtained.

Environment¶

Create environment¶

The environment setup differs based on whether you are running the code on a local machine or on Google Colab. The following sections provide instructions for setting up the environment in each case.

Local machine¶

Create conda environment for the project using the environment.yml file:

conda env create --prefix .envs/dlh-team24 -f environment.yml

Activate the environment with:

conda activate .envs/dlh-team24

This environment specifies Python 3.12.2.

Google Colab¶

The following code snippet installs the required packages and downloads the necessary files in a Google Colab environment:

In [1]:
# Google Colab environments expose "google.colab" in the IPython shell string.
# Use this to detect whether we are running in Colab.
COLAB_ENV = "google.colab" in str(get_ipython())
if COLAB_ENV:
    # Install the vitaldb client library
    %pip install vitaldb

    # Executing in Colab therefore download cached preprocessed data.
    # TODO: Integrate this with the setup local cache data section below.
    # Check for file existence before overwriting.
    import gdown
    gdown.download(id="15b5Nfhgj3McSO2GmkVUKkhSSxQXX14hJ", output="vitaldb_cache.tgz")
    !tar -zxf vitaldb_cache.tgz

    # Download sqi_filter.csv from github repo
    !wget https://raw.githubusercontent.com/abarrie2/cs598-dlh-project/main/sqi_filter.csv

All other required packages are already installed in the Google Colab environment. As of May 5, 2024, Google Colab uses Python 3.10.12.

Load environment¶

In [2]:
# Import packages
import os
import random
import sys
import uuid
import copy
from collections import defaultdict
from glob import glob

from timeit import default_timer as timer

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.signal import butter, lfilter, spectrogram
from sklearn.manifold import TSNE
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, roc_auc_score, precision_recall_curve, auc, confusion_matrix
from sklearn.metrics import RocCurveDisplay, PrecisionRecallDisplay, average_precision_score
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
import torch
from torch.utils.data import Dataset
import vitaldb
import h5py

import torch.nn as nn
import torch.nn.functional as F
from tqdm import tqdm
from datetime import datetime

Start a timer to measure notebook runtime:

In [3]:
global_time_start = timer()

Set random seeds to generate consistent results:

In [4]:
RANDOM_SEED = 42

def reset_random_state():
    random.seed(RANDOM_SEED)
    np.random.seed(RANDOM_SEED)
    torch.manual_seed(RANDOM_SEED)
    if torch.cuda.is_available():
        torch.cuda.manual_seed(RANDOM_SEED)
        torch.cuda.manual_seed_all(RANDOM_SEED)
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
    os.environ["PYTHONHASHSEED"] = str(RANDOM_SEED)
    
reset_random_state()

Set device to GPU or MPS if available

In [5]:
device = torch.device("cuda" if torch.cuda.is_available() else "mps" if (torch.backends.mps.is_available() and torch.backends.mps.is_built()) else "cpu")
print(f"Using device: {device}")
Using device: mps

Define class to print to console and simultaneously save to file:

In [6]:
class ForkedStdout:
    def __init__(self, file_path):
        self.file = open(file_path, 'w')
        self.stdout = sys.stdout

    def write(self, message):
        self.stdout.write(message)
        self.file.write(message)

    def flush(self):
        self.stdout.flush()
        self.file.flush()

    def __enter__(self):
        sys.stdout = self

    def __exit__(self, exc_type, exc_val, exc_tb):
        sys.stdout = self.stdout
        self.file.close()

Data¶

Data Description¶

Source¶

Data for this project is sourced from the open biosignal VitalDB dataset as described in "VitalDB, a high-fidelity multi-parameter vital signs database in surgical patients" by Lee H-C et al. (2022) [4], which contains perioperative vital signs and numerical data from 6,388 cases of non-cardiac (general, thoracic, urological, and gynecological) surgery patients who underwent routine or emergency surgery at Seoul National University Hospital between 2016 and 2017. The dataset includes ABP, ECG, and EEG signals, as well as other physiological data. The dataset is available through an API and Python library, and at PhysioNet: https://physionet.org/content/vitaldb/1.0.0/

Statistics¶

Characteristics of the dataset:

| Characteristic        | Value         | Details              |
|-----------------------|---------------|----------------------|
| Total number of cases | 6,388         |                      |
| Sex (male)            | 3,243 (50.8%) |                      |
| Age (years)           | 59            | Range: 48-68         |
| Height (cm)           | 162           | Range: 156-169       |
| Weight (kg)           | 61            | Range: 53-69         |
| Tram-Rac 4A tracks    | 6,355 (99.5%) | Sampling rate: 500Hz |
| BIS Vista tracks      | 5,566 (87.1%) | Sampling rate: 128Hz |
| Case duration (min)   | 189           | Range: 27-1041       |

Labels are only known after processing the data. In the original paper, there were an average of 1.6 IOH events and 5.7 non-events per case, so we expect approximately 10,221 IOH events and 36,412 non-events in the dataset.

Data Processing¶

Data will be processed as follows:

  1. Load the dataset from VitalDB, or from a local cache if previously downloaded.
  2. Apply the inclusion and exclusion selection criteria to filter the dataset according to surgery metadata.
  3. Generate a minified dataset by discarding all tracks except ABP, ECG, and EEG.
  4. Preprocess the data by applying band-pass and z-score normalization to the ECG and EEG signals, and filtering out ABP signals below a Signal Quality Index (SQI) threshold.
  5. Generate event and non-event samples by extracting 60-second segments around IOH events and non-events.
  6. Split the dataset into training, validation, and test sets with a 6:1:3 ratio, ensuring that samples from a single case are not split across different sets to avoid data leakage.
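The 6:1:3 case-level split in step 6 can be sketched with scikit-learn: splitting on case ids (rather than on individual segments) keeps all samples from one case in a single partition and prevents leakage. The `case_ids` array here is a hypothetical stand-in for the real case index.

```python
import numpy as np
from sklearn.model_selection import train_test_split

case_ids = np.arange(100)  # illustrative case ids

# First carve off 60% for training, leaving 40% to divide further
train_ids, rest_ids = train_test_split(case_ids, test_size=0.4, random_state=42)
# Split the remaining 40% into validation (10%) and test (30%) => 6:1:3 overall
val_ids, test_ids = train_test_split(rest_ids, test_size=0.75, random_state=42)

print(len(train_ids), len(val_ids), len(test_ids))  # 60 10 30
```

Segments are then assigned to a partition by looking up which set their case id belongs to.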

Set Up Local Data Caches¶

VitalDB data is static, so local copies can be stored and reused to avoid expensive downloads and to speed up data processing.

The default directory defined below is in the project .gitignore file. If this is modified, the new directory should also be added to the project .gitignore.

In [7]:
VITALDB_CACHE = './vitaldb_cache'
VITAL_ALL = f"{VITALDB_CACHE}/vital_all"
VITAL_MINI = f"{VITALDB_CACHE}/vital_mini"
VITAL_METADATA = f"{VITALDB_CACHE}/metadata"
VITAL_MODELS = f"{VITALDB_CACHE}/models"
VITAL_RUNS = f"{VITALDB_CACHE}/runs"
VITAL_PREPROCESS_SCRATCH = f"{VITALDB_CACHE}/data_scratch"
VITAL_EXTRACTED_SEGMENTS = f"{VITALDB_CACHE}/segments_golden"
In [8]:
TRACK_CACHE = None
SEGMENT_CACHE = None

# when USE_MEMORY_CACHING is enabled, track data will be persisted in an in-memory cache. Not useful once we have already pre-extracted all event segments
# DON'T USE: Stores items in memory that are later not used. Causes OOM on segment extraction.
USE_MEMORY_CACHING = False

# When RESET_CACHE is set to True, it will ensure the TRACK_CACHE is disposed and recreated when we do dataset initialization.
# Use as a shortcut to wiping cache rather than restarting kernel
RESET_CACHE = False

PREDICTION_WINDOW = 3
#PREDICTION_WINDOW = 'ALL'

ALL_PREDICTION_WINDOWS = [3, 5, 10, 15]

# Maximum number of cases of interest for which to download data.
# Set to a small value (e.g., 20) for demo purposes, or set to None to download and process all cases.
MAX_CASES = None
#MAX_CASES = 300

# Preloading Cases: when true, all matched cases will have the _mini tracks extracted and put into in-mem dict
PRELOADING_CASES = False
PRELOADING_SEGMENTS = True
# Perform Data Preprocessing: do we want to take the raw vital file and extract segments of interest for training?
PERFORM_DATA_PREPROCESSING = False
In [9]:
# Create each cache directory if it does not already exist.
for cache_dir in [VITALDB_CACHE, VITAL_ALL, VITAL_MINI, VITAL_METADATA, VITAL_MODELS,
                  VITAL_RUNS, VITAL_PREPROCESS_SCRATCH, VITAL_EXTRACTED_SEGMENTS]:
    os.makedirs(cache_dir, exist_ok=True)

print(os.listdir(VITALDB_CACHE))
['models_', 'models_old_0505', 'segments_filter_neg', 'segments_bak', 'runs_old', 'runs_03_15_parameter_tuning', 'segments_bak_0505', '.DS_Store', 'segments_filter_neg_pos', 'vital_mini_bak_0501', 'vital_all', 'segments_sizes_sp.txt', 'ABP_12_RESIDUAL_BLOCKS_64_BATCH_SIZE_1e-04_LEARNING_RATE_015_MINS__ALL_MAX_CASES_a8a3f484_0004.model', 'models_all_cases_baseline', 'segments_golden', 'models', 'docs', 'vital_mini.tar', 'data_scratch', 'segments_md5_sp.txt', 'vital_file_md5_mw.txt', 'segments_bak_0501', 'osfs', 'runs_03_15', 'vital_mini', 'segments_filter_none', 'vital_file_mini_md5_sp.txt', 'vital_file_mini_file_sizes_sp.txt', 'runs', 'metadata', 'runs_old_0505', 'segments', 'models_old', 'runs_03_segment_fixes', 'vital_file_md5_sp.txt', 'models_03_15_parameter_tuning']

Bulk Data Download¶

This step is not required, but will significantly speed up downstream processing and avoid a high volume of API requests to the VitalDB web site.

Note: The dataset is slightly different depending on whether it is downloaded from the API or from Physionet. In almost all cases, the relevant tracks are identical between the two, but this is not always true.

The cache population code checks whether the .vital files are locally available; the cache can be populated by calling the vitaldb API or by manually prepopulating it (recommended):

  • Manually download the dataset from the following site: https://physionet.org/content/vitaldb/1.0.0/
    • Download the zip file in a browser, or
    • Use wget -r -N -c -np https://physionet.org/files/vitaldb/1.0.0/ to download the files in a terminal
  • Move the contents of vital_files into the ${VITAL_ALL} directory.
In [10]:
# Returns the Pandas DataFrame for the specified dataset.
#   One of 'cases', 'labs', or 'trks'
# If the file exists locally, create and return the DataFrame.
# Else, download and cache the csv first, then return the DataFrame.
def vitaldb_dataframe_loader(dataset_name):
    if dataset_name not in ['cases', 'labs', 'trks']:
        raise ValueError(f'Invalid dataset name: {dataset_name}')
    file_path = f'{VITAL_METADATA}/{dataset_name}.csv'
    if os.path.isfile(file_path):
        print(f'{dataset_name}.csv exists locally.')
        df = pd.read_csv(file_path)
        return df
    else:
        print(f'downloading {dataset_name} and storing in the local cache for future reuse.')
        df = pd.read_csv(f'https://api.vitaldb.net/{dataset_name}')
        df.to_csv(file_path, index=False)
        return df

Exploratory Data Analysis¶

Cases¶

In [11]:
cases = vitaldb_dataframe_loader('cases')
cases = cases.set_index('caseid')
cases.shape
cases.csv exists locally.
Out[11]:
(6388, 73)
In [12]:
cases.index.nunique()
Out[12]:
6388
In [13]:
cases.head()
Out[13]:
subjectid casestart caseend anestart aneend opstart opend adm dis icu_days ... intraop_colloid intraop_ppf intraop_mdz intraop_ftn intraop_rocu intraop_vecu intraop_eph intraop_phe intraop_epi intraop_ca
caseid
1 5955 0 11542 -552 10848.0 1668 10368 -236220 627780 0 ... 0 120 0.0 100 70 0 10 0 0 0
2 2487 0 15741 -1039 14921.0 1721 14621 -221160 1506840 0 ... 0 150 0.0 0 100 0 20 0 0 0
3 2861 0 4394 -590 4210.0 1090 3010 -218640 40560 0 ... 0 0 0.0 0 50 0 0 0 0 0
4 1903 0 20990 -778 20222.0 2522 17822 -201120 576480 1 ... 0 80 0.0 100 100 0 50 0 0 0
5 4416 0 21531 -1009 22391.0 2591 20291 -67560 3734040 13 ... 0 0 0.0 0 160 0 10 900 0 2100

5 rows × 73 columns

In [14]:
cases['sex'].value_counts()
Out[14]:
sex
M    3243
F    3145
Name: count, dtype: int64

Tracks¶

In [15]:
trks = vitaldb_dataframe_loader('trks')
trks = trks.set_index('caseid')
trks.shape
trks.csv exists locally.
Out[15]:
(486449, 2)
In [16]:
trks.index.nunique()
Out[16]:
6388
In [17]:
trks.groupby('caseid')[['tid']].count().plot();
In [18]:
trks.groupby('caseid')[['tid']].count().hist();
In [19]:
trks.groupby('tname').count().sort_values(by='tid', ascending=False)
Out[19]:
tid
tname
Solar8000/HR 6387
Solar8000/PLETH_SPO2 6386
Solar8000/PLETH_HR 6386
Primus/CO2 6362
Primus/PAMB_MBAR 6361
... ...
Orchestra/AMD_VOL 1
Solar8000/ST_V5 1
Orchestra/NPS_VOL 1
Orchestra/AMD_RATE 1
Orchestra/VEC_VOL 1

196 rows × 1 columns

Parameters of Interest¶

Hemodynamic Parameters Reference¶

https://vitaldb.net/dataset/?query=overview#h.f7d712ycdpk2

SNUADC/ART

arterial blood pressure waveform

Parameter, Description, Type/Hz, Unit

SNUADC/ART, Arterial pressure wave, W/500, mmHg

In [20]:
trks[trks['tname'].str.contains('SNUADC/ART')].shape
Out[20]:
(3645, 2)

SNUADC/ECG_II

electrocardiogram waveform

Parameter, Description, Type/Hz, Unit

SNUADC/ECG_II, ECG lead II wave, W/500, mV

In [21]:
trks[trks['tname'].str.contains('SNUADC/ECG_II')].shape
Out[21]:
(6355, 2)

BIS/EEG1_WAV

electroencephalogram waveform

Parameter, Description, Type/Hz, Unit

BIS/EEG1_WAV, EEG wave from channel 1, W/128, uV

In [22]:
trks[trks['tname'].str.contains('BIS/EEG1_WAV')].shape
Out[22]:
(5871, 2)

Cases of Interest¶

These are the subset of case ids for which modelling and analysis will be performed based upon inclusion criteria and waveform data availability.

In [23]:
# TRACK NAMES is used for metadata analysis via API
TRACK_NAMES = ['SNUADC/ART', 'SNUADC/ECG_II', 'BIS/EEG1_WAV']
TRACK_SRATES = [500, 500, 128]
# EXTRACTION TRACK NAMES adds the EVENT track which is only used when doing actual file i/o
EXTRACTION_TRACK_NAMES = ['SNUADC/ART', 'SNUADC/ECG_II', 'BIS/EEG1_WAV', 'EVENT']
EXTRACTION_TRACK_SRATES = [500, 500, 128, 1]

As in the paper, select cases which meet the following criteria:

For patients, the inclusion criteria were as follows:

  1. adults (age >= 18)
  2. administered general anaesthesia
  3. undergone non-cardiac surgery.

For waveform data, the inclusion criteria were as follows:

  1. no missing monitoring for ABP, ECG, and EEG waveforms
  2. no cases containing false events or non-events due to poor signal quality (checked in second stage of data preprocessing)
In [24]:
# Adult
inclusion_1 = cases.loc[cases['age'] >= 18].index
print(f'{len(cases)-len(inclusion_1)} cases excluded, {len(inclusion_1)} remaining due to age criteria')

# General Anesthesia
inclusion_2 = cases.loc[cases['ane_type'] == 'General'].index
print(f'{len(cases)-len(inclusion_2)} cases excluded, {len(inclusion_2)} remaining due to anesthesia criteria')

# Non-cardiac surgery
inclusion_3 = cases.loc[
    ~cases['opname'].str.contains("cardiac", case=False)
    & ~cases['opname'].str.contains("aneurysmal", case=False)
].index
print(f'{len(cases)-len(inclusion_3)} cases excluded, {len(inclusion_3)} remaining due to non-cardiac surgery criteria')

# ABP, ECG, EEG waveforms
inclusion_4 = trks.loc[trks['tname'].isin(TRACK_NAMES)].index.value_counts()
inclusion_4 = inclusion_4[inclusion_4 == len(TRACK_NAMES)].index
print(f'{len(cases)-len(inclusion_4)} cases excluded, {len(inclusion_4)} remaining due to missing waveform data')

# SQI filter
# NOTE: this depends on a sqi_filter.csv generated by external processing
inclusion_5 = pd.read_csv('sqi_filter.csv', header=None, names=['caseid','sqi']).set_index('caseid').index
print(f'{len(cases)-len(inclusion_5)} cases excluded, {len(inclusion_5)} remaining due to SQI threshold not being met')

# Only include cases with known good waveforms.
exclusion_6 = pd.read_csv('malformed_tracks_filter.csv', header=None, names=['caseid']).set_index('caseid').index
inclusion_6 = cases.index.difference(exclusion_6)
print(f'{len(cases)-len(inclusion_6)} cases excluded, {len(inclusion_6)} remaining due to malformed waveforms')

cases_of_interest_idx = inclusion_1 \
    .intersection(inclusion_2) \
    .intersection(inclusion_3) \
    .intersection(inclusion_4) \
    .intersection(inclusion_5) \
    .intersection(inclusion_6)

cases_of_interest = cases.loc[cases_of_interest_idx]

print()
print(f'{cases_of_interest_idx.shape[0]} out of {cases.shape[0]} total cases remaining after exclusions applied')

# Trim cases of interest to MAX_CASES
if MAX_CASES:
    cases_of_interest_idx = cases_of_interest_idx[:MAX_CASES]
print(f'{cases_of_interest_idx.shape[0]} cases of interest selected')
57 cases excluded, 6331 remaining due to age criteria
345 cases excluded, 6043 remaining due to anesthesia criteria
14 cases excluded, 6374 remaining due to non-cardiac surgery criteria
3019 cases excluded, 3369 remaining due to missing waveform data
0 cases excluded, 6388 remaining due to SQI threshold not being met
533 cases excluded, 5855 remaining due to malformed waveforms

2763 out of 6388 total cases remaining after exclusions applied
2763 cases of interest selected
In [25]:
cases_of_interest.head(n=5)
Out[25]:
subjectid casestart caseend anestart aneend opstart opend adm dis icu_days ... intraop_colloid intraop_ppf intraop_mdz intraop_ftn intraop_rocu intraop_vecu intraop_eph intraop_phe intraop_epi intraop_ca
caseid
1 5955 0 11542 -552 10848.0 1668 10368 -236220 627780 0 ... 0 120 0.0 100 70 0 10 0 0 0
4 1903 0 20990 -778 20222.0 2522 17822 -201120 576480 1 ... 0 80 0.0 100 100 0 50 0 0 0
7 5124 0 15770 477 14817.0 3177 14577 -154320 623280 3 ... 0 0 0.0 0 120 0 0 0 0 0
10 2175 0 20992 -1743 21057.0 2457 19857 -220740 3580860 1 ... 0 90 0.0 0 110 0 20 500 0 600
12 491 0 31203 -220 31460.0 5360 30860 -208500 1519500 4 ... 200 100 0.0 100 70 0 20 0 0 3300

5 rows × 73 columns

Note: In the original paper, the authors used an SQI measure they called jSQI, but which appears to be jSQI + wSQI. We were not able to implement the same filter, so the included sqi_filter.csv simulates it. By not excluding cases where the SQI falls below the authors' threshold, our dataset is noisier than theirs, which will likely impact performance.

Tracks of Interest¶

These are the subset of tracks (waveforms) for the cases of interest identified above.

In [26]:
# A single case maps to one or more waveform tracks. Select only the tracks required for analysis.
trks_of_interest = trks.loc[cases_of_interest_idx][trks.loc[cases_of_interest_idx]['tname'].isin(TRACK_NAMES)]
trks_of_interest.shape
Out[26]:
(8289, 2)
In [27]:
trks_of_interest.head(n=5)
Out[27]:
tname tid
caseid
1 BIS/EEG1_WAV 0aa685df768489a18a5e9f53af0d83bf60890c73
1 SNUADC/ART 724cdd7184d7886b8f7de091c5b135bd01949959
1 SNUADC/ECG_II 8c9161aaae8cb578e2aa7b60f44234d98d2b3344
4 BIS/EEG1_WAV 1b4c2379be3397a79d3787dd810190150dc53f27
4 SNUADC/ART e28777c4706fe3a5e714bf2d91821d22d782d802
In [28]:
trks_of_interest_idx = trks_of_interest.set_index('tid').index
trks_of_interest_idx.shape
Out[28]:
(8289,)

Build Tracks Cache for Local Processing¶

Track data are large and therefore expensive to download each time they are used. By default, the .vital file format stores all tracks for each case internally. Since only select tracks per case are required, each .vital file can be further reduced by discarding the unused tracks.

In [29]:
# Ensure the full vital file dataset is available for cases of interest.
count_downloaded = 0
count_present = 0

#for i, idx in enumerate(cases.index):
for idx in cases_of_interest_idx:
    full_path = f'{VITAL_ALL}/{idx:04d}.vital'
    if not os.path.isfile(full_path):
        print(f'Missing vital file: {full_path}')
        # Download and save the file.
        vf = vitaldb.VitalFile(idx)
        vf.to_vital(full_path)
        count_downloaded += 1
    else:
        count_present += 1

print()
print(f'Count of cases of interest:           {cases_of_interest_idx.shape[0]}')
print(f'Count of vital files downloaded:      {count_downloaded}')
print(f'Count of vital files already present: {count_present}')
Count of cases of interest:           2763
Count of vital files downloaded:      0
Count of vital files already present: 2763

Validate Mini Files¶

Validate the minified .vital files and check that all of the required data tracks are present. The VitalDB API does not raise an error when you request a track that does not exist.

In [30]:
# Convert vital files to "mini" versions including only the subset of tracks defined in TRACK_NAMES above.
# Only perform conversion for the cases of interest.
# NOTE: If this cell is interrupted, it can be restarted and will continue where it left off.
count_minified = 0
count_present = 0
count_missing_tracks = 0
count_not_fixable = 0

# If set to true, local mini files are checked for all tracks even if the mini file is already present.
FORCE_VALIDATE = False

for idx in cases_of_interest_idx:
    full_path = f'{VITAL_ALL}/{idx:04d}.vital'
    mini_path = f'{VITAL_MINI}/{idx:04d}_mini.vital'

    if FORCE_VALIDATE or not os.path.isfile(mini_path):
        print(f'Creating mini vital file: {idx}')
        vf = vitaldb.VitalFile(full_path, EXTRACTION_TRACK_NAMES)
        
        if len(vf.get_track_names()) != 4:
            print(f'Missing track in vital file: {idx}, {set(EXTRACTION_TRACK_NAMES).difference(set(vf.get_track_names()))}')
            count_missing_tracks += 1
            
            # Attempt to download from VitalDB directly and see if missing tracks are present.
            vf = vitaldb.VitalFile(idx, EXTRACTION_TRACK_NAMES)
            
            if len(vf.get_track_names()) != 4:
                print(f'Unable to fix missing tracks: {idx}')
                count_not_fixable += 1
                continue
                
            # Check every required track for emptiness; skip this case if any track has no samples.
            empty_track = False
            for name, srate in zip(EXTRACTION_TRACK_NAMES, EXTRACTION_TRACK_SRATES):
                if vf.get_track_samples(name, 1/srate).shape[0] == 0:
                    print(f'Empty track: {idx}, {name}')
                    count_not_fixable += 1
                    empty_track = True
                    break
            if empty_track:
                continue

        vf.to_vital(mini_path)
        count_minified += 1
    else:
        count_present += 1

print()
print(f'Count of cases of interest:           {cases_of_interest_idx.shape[0]}')
print(f'Count of vital files minified:        {count_minified}')
print(f'Count of vital files already present: {count_present}')
print(f'Count of vital files missing tracks:  {count_missing_tracks}')
print(f'Count of vital files not fixable:     {count_not_fixable}')
Count of cases of interest:           2763
Count of vital files minified:        0
Count of vital files already present: 2763
Count of vital files missing tracks:  0
Count of vital files not fixable:     0

Filtering¶

As in the original paper, preprocessing characteristics are different for each of the three signal categories:

  • ABP: no preprocessing; use as-is
  • ECG: apply a 1-40 Hz bandpass filter, then perform Z-score normalization
  • EEG: apply a 0.5-50 Hz bandpass filter

apply_bandpass_filter() implements the bandpass filter using scipy.signal

In [31]:
def apply_bandpass_filter(data, lowcut, highcut, fs, order=5):
    b, a = butter(order, [lowcut, highcut], fs=fs, btype='band')
    y = lfilter(b, a, np.nan_to_num(data))
    return y
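As a quick sanity check (not part of the original notebook), an order-2 version of this filter should pass a 10 Hz component nearly untouched while strongly attenuating a 150 Hz component at the 500 Hz sampling rate used for ABP/ECG:

```python
import numpy as np
from scipy.signal import butter, lfilter

def apply_bandpass_filter(data, lowcut, highcut, fs, order=5):
    b, a = butter(order, [lowcut, highcut], fs=fs, btype='band')
    return lfilter(b, a, np.nan_to_num(data))

fs = 500                               # ABP/ECG sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
inband = np.sin(2 * np.pi * 10 * t)    # inside the 1-40 Hz ECG passband
outband = np.sin(2 * np.pi * 150 * t)  # outside the passband
filtered = apply_bandpass_filter(inband + outband, 1, 40, fs, order=2)

# skip the first second so the filter transient settles, then the RMS should be
# close to that of the in-band component alone (~0.707)
rms = np.sqrt(np.mean(filtered[fs:] ** 2))
```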

apply_zscore_normalization() implements the Z-score normalization using numpy

In [32]:
def apply_zscore_normalization(signal):
    mean = np.nanmean(signal)
    std = np.nanstd(signal)
    return (signal - mean) / std
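The NaN-aware statistics mean that missing samples do not corrupt the normalization. A small illustration (not from the original notebook; note that a constant signal would divide by zero here):

```python
import numpy as np

def apply_zscore_normalization(signal):
    mean = np.nanmean(signal)
    std = np.nanstd(signal)
    return (signal - mean) / std

x = np.array([1.0, 2.0, np.nan, 4.0, 5.0])
z = apply_zscore_normalization(x)
# NaNs pass through unchanged, while the remaining samples are normalized
# to zero mean and unit standard deviation
```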

Filtering demonstration¶

Demonstrate effects of the filters with pre/post filtering waveforms on a sample case:

In [33]:
caseidx = 1
file_path = f"{VITAL_MINI}/{caseidx:04d}_mini.vital"
vf = vitaldb.VitalFile(file_path, TRACK_NAMES)

originalAbp = None
filteredAbp = None
originalEcg = None
filteredEcg = None
originalEeg = None
filteredEeg = None

ABP_TRACK_NAME = "SNUADC/ART"
ECG_TRACK_NAME = "SNUADC/ECG_II"
EEG_TRACK_NAME = "BIS/EEG1_WAV"

for i, (track_name, rate) in enumerate(zip(TRACK_NAMES, TRACK_SRATES)):
    # Get samples for this track
    track_samples = vf.get_track_samples(track_name, 1/rate)
    #track_samples, _ = vf.get_samples(track_name, 1/rate)
    print(f"Track {track_name} @ {rate}Hz shape {len(track_samples)}")

    if track_name == ABP_TRACK_NAME:
        # ABP waveforms are used without further pre-processing
        originalAbp = track_samples
        filteredAbp = track_samples
    elif track_name == ECG_TRACK_NAME:
        originalEcg = track_samples
        # ECG waveforms are band-pass filtered between 1 and 40 Hz, and Z-score normalized
        # first apply bandpass filter
        filteredEcg = apply_bandpass_filter(track_samples, 1, 40, rate, 2)
        # then do z-score normalization
        filteredEcg = apply_zscore_normalization(filteredEcg)
    elif track_name == EEG_TRACK_NAME:
        # EEG waveforms are band-pass filtered between 0.5 and 50 Hz
        originalEeg = track_samples
        filteredEeg = apply_bandpass_filter(track_samples, 0.5, 50, rate, 2)

def plotSignal(data, title):
    plt.figure(figsize=(20, 5))
    plt.plot(data)
    plt.title(title)
    plt.show()

plotSignal(originalAbp, "Original ABP")
plotSignal(filteredAbp, "Filtered ABP (identical to original; ABP is used as-is)")
plotSignal(originalEcg, "Original ECG")
plotSignal(filteredEcg, "Filtered ECG")
plotSignal(originalEeg, "Original EEG")
plotSignal(filteredEeg, "Filtered EEG")
Track SNUADC/ART @ 500Hz shape 5771049
Track SNUADC/ECG_II @ 500Hz shape 5771049
Track BIS/EEG1_WAV @ 128Hz shape 1477389

Perform data preprocessing¶

This section performs the actual data preprocessing laid out earlier:

In [34]:
# Preprocess data tracks
ABP_TRACK_NAME = "SNUADC/ART"
ECG_TRACK_NAME = "SNUADC/ECG_II"
EEG_TRACK_NAME = "BIS/EEG1_WAV"
EVENT_TRACK_NAME = "EVENT"
MINI_FILE_FOLDER = VITAL_MINI
CACHE_FILE_FOLDER = VITAL_PREPROCESS_SCRATCH

if RESET_CACHE:
    TRACK_CACHE = None
    SEGMENT_CACHE = None

if TRACK_CACHE is None:
    TRACK_CACHE = {}
    SEGMENT_CACHE = {}

def get_track_data(case, print_when_file_loaded = False):
    parsedFile = None
    abp = None
    eeg = None
    ecg = None
    events = None

    for i, (track_name, rate) in enumerate(zip(EXTRACTION_TRACK_NAMES, EXTRACTION_TRACK_SRATES)):
        # use integer case id and track name, delimited by pipe, as cache key
        cache_label = f"{case}|{track_name}"
        
        if cache_label not in TRACK_CACHE:
            if parsedFile is None:
                file_path = f"{MINI_FILE_FOLDER}/{case:04d}_mini.vital"
                if print_when_file_loaded:
                    print(f"[{datetime.now()}] Loading vital file {file_path}")
                parsedFile = vitaldb.VitalFile(file_path, EXTRACTION_TRACK_NAMES)
            
            dataset = np.array(parsedFile.get_track_samples(track_name, 1/rate))
            
            if track_name == ABP_TRACK_NAME:
                # no filtering for ABP
                abp = dataset
                abp = pd.DataFrame(abp).ffill(axis=0).bfill(axis=0)[0].values
                if USE_MEMORY_CACHING:
                    TRACK_CACHE[cache_label] = abp
            elif track_name == ECG_TRACK_NAME:
                ecg = dataset
                # apply ECG filtering: first bandpass then do z-score normalization
                ecg = pd.DataFrame(ecg).ffill(axis=0).bfill(axis=0)[0].values
                ecg = apply_bandpass_filter(ecg, 1, 40, rate, 2)
                ecg = apply_zscore_normalization(ecg)
                
                if USE_MEMORY_CACHING:
                    TRACK_CACHE[cache_label] = ecg
            elif track_name == EEG_TRACK_NAME:
                eeg = dataset
                eeg = pd.DataFrame(eeg).ffill(axis=0).bfill(axis=0)[0].values
                # apply EEG filtering: bandpass only
                eeg = apply_bandpass_filter(eeg, 0.5, 50, rate, 2)
                if USE_MEMORY_CACHING:
                    TRACK_CACHE[cache_label] = eeg
            elif track_name == EVENT_TRACK_NAME:
                events = dataset
                if USE_MEMORY_CACHING:
                    TRACK_CACHE[cache_label] = events
        else:
            # cache hit, pull from cache
            if track_name == ABP_TRACK_NAME:
                abp = TRACK_CACHE[cache_label]
            elif track_name == ECG_TRACK_NAME:
                ecg = TRACK_CACHE[cache_label]
            elif track_name == EEG_TRACK_NAME:
                eeg = TRACK_CACHE[cache_label]
            elif track_name == EVENT_TRACK_NAME:
                events = TRACK_CACHE[cache_label]

    return (abp, ecg, eeg, events)

# ABP waveforms are used without further pre-processing
# ECG waveforms are band-pass filtered between 1 and 40 Hz, and Z-score normalized
# EEG waveforms are band-pass filtered between 0.5 and 50 Hz
if PRELOADING_CASES:
    # determine disk cache file label
    maxlabel = "ALL"
    if MAX_CASES is not None:
        maxlabel = str(MAX_CASES)
    picklefile = f"{CACHE_FILE_FOLDER}/{PREDICTION_WINDOW}_minutes_MAX{maxlabel}.trackcache"

    for track in tqdm(cases_of_interest_idx):
        # getting track data will cause a cache-check and fill when missing
        # will also apply appropriate filtering per track
        get_track_data(track, False)
    
    print(f"Generated track cache, {len(TRACK_CACHE)} records generated")

Processed data is stored in .h5 files. Define a loader to read this data and return a tuple with the waveform data:

In [35]:
def get_segment_data(file_path):
    abp = None
    eeg = None
    ecg = None

    if USE_MEMORY_CACHING:
        if file_path in SEGMENT_CACHE:
            (abp, ecg, eeg) = SEGMENT_CACHE[file_path]
            return (abp, ecg, eeg)

    try:
        with h5py.File(file_path, 'r') as f:
            abp = np.array(f['abp'])
            ecg = np.array(f['ecg'])
            eeg = np.array(f['eeg'])
        
        abp = np.array(abp)
        eeg = np.array(eeg)
        ecg = np.array(ecg)

        if len(abp) > 30000:
            abp = abp[:30000]
        elif len(abp) < 30000:
            abp = np.resize(abp, (30000))

        if len(ecg) > 30000:
            ecg = ecg[:30000]
        elif len(ecg) < 30000:
            ecg = np.resize(ecg, (30000))

        if len(eeg) > 7680:
            eeg = eeg[:7680]
        elif len(eeg) < 7680:
            eeg = np.resize(eeg, (7680))

        if USE_MEMORY_CACHING:
            SEGMENT_CACHE[file_path] = (abp, ecg, eeg)
    except Exception:
        abp = None
        ecg = None
        eeg = None

    return (abp, ecg, eeg)
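Note that the loader coerces every segment to a fixed length (30000 samples for ABP/ECG, i.e. 60 s at 500 Hz, and 7680 samples for EEG, i.e. 60 s at 128 Hz), and that np.resize pads a short array by repeating its contents rather than zero-padding:

```python
import numpy as np

short = np.array([1.0, 2.0, 3.0])
padded = np.resize(short, (5,))   # repeats the array to reach the target length
# → array([1., 2., 3., 1., 2.])

long = np.arange(10.0)
truncated = long[:5]              # over-length segments are simply cut
```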

The .vital files contain time-series information from before and after the surgery, and include a label track where significant events can be indicated. Define a function to read this track and extract the surgery start and end times so that data can be limited to this period:

In [36]:
def getSurgeryBoundariesInSeconds(event, debug=False):
    eventIndices = np.argwhere(event==event)  # indices of non-NaN entries (NaN != NaN)
    # find the last marker containing 'started' and the first containing 'finish'
    lastStart = 0
    firstFinish = len(event)-1
    
    # find last start
    for idx in eventIndices:
        if 'started' in event[idx[0]]:
            if debug:
                print(event[idx[0]])
                print(idx[0])
            lastStart = idx[0]
    
    # find first finish
    for idx in eventIndices:
        if 'finish' in event[idx[0]]:
            if debug:
                print(event[idx[0]])
                print(idx[0])

            firstFinish = idx[0]
            break
    
    if debug:
        print(f'lastStart, firstFinish: {lastStart}, {firstFinish}')
    return (lastStart, firstFinish)
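A minimal illustration of this marker search (the function is reproduced here with a hypothetical event track; array indices stand in for seconds):

```python
import numpy as np

def getSurgeryBoundariesInSeconds(event):
    eventIndices = np.argwhere(event == event)  # non-NaN entries (NaN != NaN)
    lastStart, firstFinish = 0, len(event) - 1
    for idx in eventIndices:                    # last marker containing 'started'
        if 'started' in event[idx[0]]:
            lastStart = idx[0]
    for idx in eventIndices:                    # first marker containing 'finish'
        if 'finish' in event[idx[0]]:
            firstFinish = idx[0]
            break
    return (lastStart, firstFinish)

events = np.array([np.nan, 'anesthesia started', np.nan, 'surgery started',
                   np.nan, np.nan, 'surgery finished', np.nan], dtype=object)
# → (3, 6): the last 'started' marker and the first 'finish' marker
```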

Define a function to check whether extracted segments already exist for a case. If not, they will need to be generated:

In [37]:
def areCaseSegmentsCached(caseid):
    seg_folder = f"{VITAL_EXTRACTED_SEGMENTS}/{caseid:04d}"
    return os.path.exists(seg_folder) and len(os.listdir(seg_folder)) > 0

Define a basic signal quality check function for ABP data:

In [38]:
def isAbpSegmentValidNumpy(samples, debug=False):
    valid = True
    if np.isnan(samples).mean() > 0.1:
        valid = False
        if debug:
            print(f">10% NaN")
    elif (samples > 200).any():
        valid = False
        if debug:
            print(f"Presence of BP > 200")
    elif (samples < 30).any():
        valid = False
        if debug:
            print(f"Presence of BP < 30")
    elif np.max(samples) - np.min(samples) < 30:
        if debug:
            print(f"Max - Min test < 30")
        valid = False
    elif (np.abs(np.diff(samples)) > 30).any():  # abrupt change -> noise
        if debug:
            print(f"Abrupt change (noise)")
        valid = False
    
    return valid
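To illustrate (with the checks condensed into a standalone version and applied to synthetic data): a physiologic-looking pulsatile waveform passes, while a flat-line fails the max-min test:

```python
import numpy as np

def isAbpSegmentValidNumpy(samples):
    if np.isnan(samples).mean() > 0.1:                 # too many missing samples
        return False
    if (samples > 200).any() or (samples < 30).any():  # outside physiologic range
        return False
    if np.max(samples) - np.min(samples) < 30:         # insufficient pulse pressure
        return False
    if (np.abs(np.diff(samples)) > 30).any():          # abrupt change -> noise
        return False
    return True

t = np.arange(0, 60, 1 / 500)                          # 60 s at 500 Hz
pulsatile = 90 + 30 * np.sin(2 * np.pi * 1.5 * t)      # 60-120 mmHg "beats"
flatline = np.full_like(t, 80.0)                       # no pulse pressure
```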

Check if the ABP data extracted for a case is valid:

In [39]:
def isAbpSegmentValid(vf, debug=False):
    ABP_ECG_SRATE_HZ = 500
    ABP_TRACK_NAME = "SNUADC/ART"

    samples = np.array(vf.get_track_samples(ABP_TRACK_NAME, 1/ABP_ECG_SRATE_HZ))
    return isAbpSegmentValidNumpy(samples, debug)

Save extracted segments to disk, using the HDF5 (.h5) format for efficient packing and retrieval:

In [40]:
def saveCaseSegments(caseid, positiveSegments, negativeSegments, compresslevel=9, debug=False, forceWrite=False):
    if len(positiveSegments) == 0 and len(negativeSegments) == 0:
        # exit early if no events found
        print(f'{caseid}: exit early, no segments to save')
        return

    # event composition
    # predictiveSegmentStart in seconds, predictiveSegmentEnd in seconds, predWindow (0 for negative), abp, ecg, eeg)
    # 0start, 1end, 2predwindow, 3abp, 4ecg, 5eeg

    seg_folder = f"{VITAL_EXTRACTED_SEGMENTS}/{caseid:04d}"
    if not os.path.exists(seg_folder):
        # if directory needs to be created, then there are no cached segments
        os.mkdir(seg_folder)
    else:
        if not forceWrite:
            # exit early if folder already exists, case already produced
            return

    # prior to writing files out, clear existing files
    for filename in os.listdir(seg_folder):
        file_path = os.path.join(seg_folder, filename)
        if debug:
            print(f'deleting: {file_path}')
        try:
            if os.path.isfile(file_path):
                os.unlink(file_path)
        except Exception as e:
            print('Failed to delete %s. Reason: %s' % (file_path, e))
    
    count_pos_saved = 0
    for i in range(0, len(positiveSegments)):
        event = positiveSegments[i]
        startIndex = event[0]
        endIndex = event[1]
        predWindow = event[2]
        abp = event[3]
        #ecg = event[4]
        #eeg = event[5]

        seg_filename = f"{caseid:04d}_{startIndex}_{predWindow:02d}_True.h5"
        seg_fullpath = f"{seg_folder}/{seg_filename}"
        if isAbpSegmentValidNumpy(abp, debug):
            count_pos_saved += 1

            abp = abp.tolist()
            ecg = event[4].tolist()
            eeg = event[5].tolist()
        
            f = h5py.File(seg_fullpath, "w")
            f.create_dataset('abp', data=abp, compression="gzip", compression_opts=compresslevel)
            f.create_dataset('ecg', data=ecg, compression="gzip", compression_opts=compresslevel)
            f.create_dataset('eeg', data=eeg, compression="gzip", compression_opts=compresslevel)
            
            f.flush()
            f.close()
            f = None

            abp = None
            ecg = None
            eeg = None

            # f.create_dataset('label', data=[1], compression="gzip", compression_opts=compresslevel)
            # f.create_dataset('pred_window', data=[event[2]], compression="gzip", compression_opts=compresslevel)
            # f.create_dataset('caseid', data=[caseid], compression="gzip", compression_opts=compresslevel)
        elif debug:
            print(f"{caseid:04d} {predWindow:02d}min {startIndex} starttime = ignored, segment validity issues")

    count_neg_saved = 0
    for i in range(0, len(negativeSegments)):
        event = negativeSegments[i]
        startIndex = event[0]
        endIndex = event[1]
        predWindow = event[2]
        abp = event[3]
        #ecg = event[4]
        #eeg = event[5]

        seg_filename = f"{caseid:04d}_{startIndex}_0_False.h5"
        seg_fullpath = f"{seg_folder}/{seg_filename}"
        if isAbpSegmentValidNumpy(abp, debug):
            count_neg_saved += 1

            abp = abp.tolist()
            ecg = event[4].tolist()
            eeg = event[5].tolist()
            
            f = h5py.File(seg_fullpath, "w")
            f.create_dataset('abp', data=abp, compression="gzip", compression_opts=compresslevel)
            f.create_dataset('ecg', data=ecg, compression="gzip", compression_opts=compresslevel)
            f.create_dataset('eeg', data=eeg, compression="gzip", compression_opts=compresslevel)
            
            f.flush()
            f.close()
            f = None

            abp = None
            ecg = None
            eeg = None

            # f.create_dataset('label', data=[0], compression="gzip", compression_opts=compresslevel)
            # f.create_dataset('pred_window', data=[0], compression="gzip", compression_opts=compresslevel)
            # f.create_dataset('caseid', data=[caseid], compression="gzip", compression_opts=compresslevel)
        elif debug:
            print(f"{caseid:04d} CleanWindow {startIndex} starttime = ignored, segment validity issues")
            
    if count_neg_saved == 0 and count_pos_saved == 0:
        print(f'{caseid}: nothing saved, all segments filtered')

The following method is adapted from the preprocessing block of reference [6] (https://github.com/vitaldb/examples/blob/master/hypotension_art.ipynb)

The approach first finds an intraoperative hypotensive event in the ABP waveform. It then backtracks to an earlier point in the waveform to extract a 60-second segment representing the waveform feature to use as model input. The figure below shows an example of this approach and is reproduced from the VitalDB example notebook referenced above.

Feature segment extraction

Generate hypotensive events

Hypotensive events are defined as a 1-minute interval with a sustained mean ABP of less than 65 mmHg. Note: hypotensive events should be at least 20 minutes apart to minimize potential residual effects from previous events.

Generate hypotension non-events

To sample non-events, 30-minute segments where the mean ABP remained above 75 mmHg were selected, and then three one-minute samples of each waveform were obtained from the middle of the segment. Both event and non-event extraction occur in extract_segments.

In [41]:
def extract_segments(
    cases_of_interest_idx,
    debug=False,
    checkCache=True,
    forceWrite=False,
    returnSegments=False,
    skipInvalidCleanEvents=False,
    skipInvalidIohEvents=False
):
    # Sampling rate for ABP and ECG, Hz. These rates should be the same. Default = 500
    ABP_ECG_SRATE_HZ = 500

    # Sampling rate for EEG. Default = 128
    EEG_SRATE_HZ = 128

    # Final dataset for training and testing the model.
    positiveSegmentsMap = {}
    negativeSegmentsMap = {}
    iohEventsMap = {}
    cleanEventsMap = {}

    # Process each case and extract segments. For each segment identify presence of an event in the label zone.
    count_cases = len(cases_of_interest_idx)

    #for case_count, caseid in tqdm(enumerate(cases_of_interest_idx), total=count_cases):
    for case_count, caseid in enumerate(cases_of_interest_idx):
        if debug:
            print(f'Loading case: {caseid:04d}, ({case_count + 1} of {count_cases})')

        if checkCache and areCaseSegmentsCached(caseid):
            if debug:
                print(f'Skipping case: {caseid:04d}, already cached')
            # skip records we've already cached
            continue

        # read the arterial waveform
        (abp, ecg, eeg, event) = get_track_data(caseid)
        if debug:
            print(f'Length of {TRACK_NAMES[0]}:       {abp.shape[0]}')
            print(f'Length of {TRACK_NAMES[1]}:    {ecg.shape[0]}')
            print(f'Length of {TRACK_NAMES[2]}:     {eeg.shape[0]}')

        (startInSeconds, endInSeconds) = getSurgeryBoundariesInSeconds(event)
        if debug:
            print(f"Event markers indicate that surgery begins at {startInSeconds}s and ends at {endInSeconds}s.")

        #track_length_seconds = int(len(abp) / ABP_ECG_SRATE_HZ)
        track_length_seconds = endInSeconds
        
        if debug:
            print(f"Processing case {caseid} with length {track_length_seconds}s")

        
        # check if the ABP segment in the surgery window is valid
        if debug:
            isSurgerySegmentValid = \
                isAbpSegmentValidNumpy(abp[startInSeconds * ABP_ECG_SRATE_HZ:endInSeconds * ABP_ECG_SRATE_HZ])
            print(f'{caseid}: surgery segment valid: {isSurgerySegmentValid}')
        
        iohEvents = []
        cleanEvents = []
        i = 0
        started = False
        eofReached = False
        trackStartIndex = None

        # set i pointer (which operates in seconds) to start marker for surgery
        i = startInSeconds

        # FIRST PASS
        # in the first forward pass, we are going to identify the start/end boundaries of all IOH events within the case
        ioh_events_valid = []
        
        while i < track_length_seconds - 60 and i < endInSeconds:
            segmentStart = None
            segmentEnd = None
            segFound = False

            # look forward one minute
            abpSeg = abp[i * ABP_ECG_SRATE_HZ:(i + 60) * ABP_ECG_SRATE_HZ]

            # roll forward until we hit a one minute window where mean ABP >= 65 so we know leads are connected and it's tracking
            if not started:
                if np.nanmean(abpSeg) >= 65:
                    started = True
                    trackStartIndex = i
            # if we're started and mean abp for the window is <65, we are starting a new IOH event
            elif np.nanmean(abpSeg) < 65:
                segmentStart = i
                # now seek forward to find the end of the event, repeatedly checking the last minute of the IOH event
                for j in range(i + 60, track_length_seconds):
                    # look backward one minute
                    abpSegForward = abp[(j - 60) * ABP_ECG_SRATE_HZ:j * ABP_ECG_SRATE_HZ]
                    if np.nanmean(abpSegForward) >= 65:
                        segmentEnd = j - 1
                        break
                if segmentEnd is None:
                    eofReached = True
                else:
                    # otherwise, end of the IOH segment has been reached, record it
                    iohEvents.append((segmentStart, segmentEnd))
                    segFound = True
                    
                    if skipInvalidIohEvents:
                        isIohSegmentValid = isAbpSegmentValidNumpy(abpSeg)
                        ioh_events_valid.append(isIohSegmentValid)
                        if debug:
                            print(f'{caseid}: ioh segment valid: {isIohSegmentValid}, {segmentStart}, {segmentEnd}, {abpSeg.shape}')
                    else:
                        ioh_events_valid.append(True)

            i += 1
            if not started:
                continue
            elif eofReached:
                break
            elif segFound:
                i = segmentEnd + 1

        # SECOND PASS
        # in the second forward pass, we are going to identify the start/end boundaries of all non-overlapping 30 minute "clean" windows
        # reuse the 'start of signal' index from our first pass
        if trackStartIndex is None:
            trackStartIndex = startInSeconds
        i = trackStartIndex
        eofReached = False

        clean_events_valid = []
        
        while i < track_length_seconds - 1800 and i < endInSeconds:
            segmentStart = None
            segmentEnd = None
            segFound = False

            startIndex = i
            endIndex = i + 1800

            # check to see if this 30 minute window overlaps any IOH events, if so ffwd to end of latest overlapping IOH
            overlapFound = False
            latestEnd = None
            for event in iohEvents:
                # case 1: starts during an event
                if startIndex >= event[0] and startIndex < event[1]:
                    latestEnd = event[1]
                    overlapFound = True
                # case 2: ends during an event
                elif endIndex >= event[0] and endIndex < event[1]:
                    latestEnd = event[1]
                    overlapFound = True
                # case 3: event occurs entirely inside of the window
                elif startIndex < event[0] and endIndex > event[1]:
                    latestEnd = event[1]
                    overlapFound = True

            # FFWD if we found an overlap
            if overlapFound:
                i = latestEnd + 1
                continue

            # look forward 30 minutes
            abpSeg = abp[startIndex * ABP_ECG_SRATE_HZ:endIndex * ABP_ECG_SRATE_HZ]

            # if we're started and mean abp for the window is >= 75, we are starting a new clean event
            if np.nanmean(abpSeg) >= 75:
                overlapFound = False
                latestEnd = None
                for event in iohEvents:
                    # case 1: starts during an event
                    if startIndex >= event[0] and startIndex < event[1]:
                        latestEnd = event[1]
                        overlapFound = True
                    # case 2: ends during an event
                    elif endIndex >= event[0] and endIndex < event[1]:
                        latestEnd = event[1]
                        overlapFound = True
                    # case 3: event occurs entirely inside of the window
                    elif startIndex < event[0] and endIndex > event[1]:
                        latestEnd = event[1]
                        overlapFound = True

                if not overlapFound:
                    segFound = True
                    segmentEnd = endIndex
                    cleanEvents.append((startIndex, endIndex))
                    
                    if skipInvalidCleanEvents:
                        isCleanSegmentValid = isAbpSegmentValidNumpy(abpSeg)
                        clean_events_valid.append(isCleanSegmentValid)
                        if debug:
                            print(f'{caseid}: clean segment valid: {isCleanSegmentValid}, {startIndex}, {endIndex}, {abpSeg.shape}')
                    else:
                        clean_events_valid.append(True)

            i += 10
            if segFound:
                i = segmentEnd + 1

        if debug:
            print(f"IOH Events for case {caseid}: {iohEvents}")
            print(f"Clean Events for case {caseid}: {cleanEvents}")

        positiveSegments = []
        negativeSegments = []

        # THIRD PASS
        # in the third pass, we will use the collections of ioh event windows to generate our actual extracted segments based on our prediction window (positive labels)
        for i in range(0, len(iohEvents)):
            # Don't extract segments from invalid IOH event windows.
            if not ioh_events_valid[i]:
                continue

            if debug:
                print(f"Checking event {iohEvents[i]}")
            # we want to review current event boundaries, as well as previous event boundaries if available
            event = iohEvents[i]
            previousEvent = None
            if i > 0:
                previousEvent = iohEvents[i - 1]

            for predWindow in ALL_PREDICTION_WINDOWS:
                if debug:
                    print(f"Checking event {iohEvents[i]} for pred {predWindow}")
                iohEventStart = event[0]
                predictiveSegmentEnd = event[0] - (predWindow*60)
                predictiveSegmentStart = predictiveSegmentEnd - 60

                if (predictiveSegmentStart < 0):
                    # don't rewind before the beginning of the track
                    if debug:
                        print(f"Checking event {iohEvents[i]} for pred {predWindow} - exit, before beginning")
                    continue
                elif (predictiveSegmentStart < trackStartIndex):
                    # don't rewind before the beginning of signal in track
                    if debug:
                        print(f"Checking event {iohEvents[i]} for pred {predWindow} - exit, before track start")
                    continue
                elif previousEvent is not None:
                    # does this event window come before or during the previous event?
                    overlapFound = False
                    # case 1: starts during an event
                    if predictiveSegmentStart >= previousEvent[0] and predictiveSegmentStart < previousEvent[1]:
                        overlapFound = True
                    # case 2: ends during an event
                    elif iohEventStart >= previousEvent[0] and iohEventStart < previousEvent[1]:
                        overlapFound = True
                    # case 3: event occurs entirely inside of the window
                    elif predictiveSegmentStart < previousEvent[0] and iohEventStart > previousEvent[1]:
                        overlapFound = True
                    # do not extract a segment if it overlaps with another IOH event
                    if overlapFound:
                        if debug:
                            print(f"Checking event {iohEvents[i]} for pred {predWindow} - exit, overlap with earlier segment")
                        continue

                # track the positive segment
                positiveSegments.append((predictiveSegmentStart, predictiveSegmentEnd, predWindow,
                    abp[predictiveSegmentStart*ABP_ECG_SRATE_HZ:predictiveSegmentEnd*ABP_ECG_SRATE_HZ],
                    ecg[predictiveSegmentStart*ABP_ECG_SRATE_HZ:predictiveSegmentEnd*ABP_ECG_SRATE_HZ],
                    eeg[predictiveSegmentStart*EEG_SRATE_HZ:predictiveSegmentEnd*EEG_SRATE_HZ]))

        # FOURTH PASS
        # in the fourth and final pass, we will use the collections of clean event windows to generate our actual extracted segments based (negative labels)
        for i in range(0, len(cleanEvents)):
            # Don't extract segments from invalid clean event windows.
            if not clean_events_valid[i]:
                continue
            
            # everything will be 30 minutes long at least
            event = cleanEvents[i]
            # choose sample 1 @ 10 minutes
            # choose sample 2 @ 15 minutes
            # choose sample 3 @ 20 minutes
            timeAtTen = event[0] + 600
            timeAtFifteen = event[0] + 900
            timeAtTwenty = event[0] + 1200

            negativeSegments.append((timeAtTen, timeAtTen + 60, 0,
                                   abp[timeAtTen*ABP_ECG_SRATE_HZ:(timeAtTen + 60)*ABP_ECG_SRATE_HZ],
                                   ecg[timeAtTen*ABP_ECG_SRATE_HZ:(timeAtTen + 60)*ABP_ECG_SRATE_HZ],
                                   eeg[timeAtTen*EEG_SRATE_HZ:(timeAtTen + 60)*EEG_SRATE_HZ]))
            negativeSegments.append((timeAtFifteen, timeAtFifteen + 60, 0,
                                   abp[timeAtFifteen*ABP_ECG_SRATE_HZ:(timeAtFifteen + 60)*ABP_ECG_SRATE_HZ],
                                   ecg[timeAtFifteen*ABP_ECG_SRATE_HZ:(timeAtFifteen + 60)*ABP_ECG_SRATE_HZ],
                                   eeg[timeAtFifteen*EEG_SRATE_HZ:(timeAtFifteen + 60)*EEG_SRATE_HZ]))
            negativeSegments.append((timeAtTwenty, timeAtTwenty + 60, 0,
                                   abp[timeAtTwenty*ABP_ECG_SRATE_HZ:(timeAtTwenty + 60)*ABP_ECG_SRATE_HZ],
                                   ecg[timeAtTwenty*ABP_ECG_SRATE_HZ:(timeAtTwenty + 60)*ABP_ECG_SRATE_HZ],
                                   eeg[timeAtTwenty*EEG_SRATE_HZ:(timeAtTwenty + 60)*EEG_SRATE_HZ]))

        if returnSegments:
            positiveSegmentsMap[caseid] = positiveSegments
            negativeSegmentsMap[caseid] = negativeSegments
            iohEventsMap[caseid] = iohEvents
            cleanEventsMap[caseid] = cleanEvents
        
        saveCaseSegments(caseid, positiveSegments, negativeSegments, 9, debug=debug, forceWrite=forceWrite)

        #if debug:
        print(f'{caseid}: positiveSegments: {len(positiveSegments)}, negativeSegments: {len(negativeSegments)}')

    return positiveSegmentsMap, negativeSegmentsMap, iohEventsMap, cleanEventsMap
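The three overlap cases enumerated in the passes above (a window starts inside an event, ends inside an event, or fully contains an event) reduce to the standard interval-intersection test. A compact helper to that effect (illustrative only, not used by the notebook code) might look like:

```python
def intervals_overlap(a_start, a_end, b_start, b_end):
    # Half-open intervals [a_start, a_end) and [b_start, b_end) intersect
    # iff each one starts before the other one ends.
    return a_start < b_end and b_start < a_end

# partial overlap on either side, and full containment, are all covered
# by this single condition
```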

Case Extraction - Generate Segments Needed For Training¶

Ensure that all needed segments are in place for the cases being used. If the data is already stored on disk, this method returns immediately.

In [42]:
MANUAL_EXTRACT=True
SKIP_INVALID_CLEAN_EVENTS=True
SKIP_INVALID_IOH_EVENTS=True

if MANUAL_EXTRACT:
    mycoi = cases_of_interest_idx
    #mycoi = cases_of_interest_idx[:2800]
    #mycoi = [1]

    cnt = 0
    mod = 0
    for ci in mycoi:
        cnt += 1
        if mod % 100 == 0:
            print(f'count processed: {mod}, current case index: {ci}')
        try:
            p, n, i, c = extract_segments([ci], debug=False, checkCache=True, 
                                          forceWrite=True, returnSegments=False, 
                                          skipInvalidCleanEvents=SKIP_INVALID_CLEAN_EVENTS,
                                          skipInvalidIohEvents=SKIP_INVALID_IOH_EVENTS)
            p = None
            n = None
            i = None
            c = None
        except Exception as e:
            print(f'error on extract segment: {ci}: {e}')
        mod += 1
    print(f'extracted: {cnt}')
count processed: 0, current case index: 1
count processed: 100, current case index: 229
count processed: 200, current case index: 481
count processed: 300, current case index: 740
count processed: 400, current case index: 954
count processed: 500, current case index: 1160
count processed: 600, current case index: 1367
count processed: 700, current case index: 1595
count processed: 800, current case index: 1822
count processed: 900, current case index: 2055
count processed: 1000, current case index: 2317
count processed: 1100, current case index: 2533
count processed: 1200, current case index: 2775
count processed: 1300, current case index: 3014
count processed: 1400, current case index: 3218
count processed: 1500, current case index: 3442
count processed: 1600, current case index: 3682
count processed: 1700, current case index: 3879
count processed: 1800, current case index: 4109
count processed: 1900, current case index: 4347
count processed: 2000, current case index: 4603
count processed: 2100, current case index: 4830
count processed: 2200, current case index: 5072
count processed: 2300, current case index: 5314
count processed: 2400, current case index: 5568
count processed: 2500, current case index: 5793
count processed: 2600, current case index: 6017
count processed: 2700, current case index: 6248
extracted: 2763

Track and Segment Validity Checks¶

In [43]:
def printAbp(case_id_to_check, plot_invalid_only=False):
    vf_path = f'{VITAL_MINI}/{case_id_to_check:04d}_mini.vital'

    if not os.path.isfile(vf_path):
        return

    vf = vitaldb.VitalFile(vf_path)
    abp = vf.to_numpy(TRACK_NAMES[0], 1/500)

    print(f'Case {case_id_to_check}')
    print(f'ABP Shape: {abp.shape}')

    print(f'nanmin: {np.nanmin(abp)}')
    print(f'nanmean: {np.nanmean(abp)}')
    print(f'nanmax: {np.nanmax(abp)}')

    is_valid = isAbpSegmentValidNumpy(abp, debug=True)
    print(f'valid: {is_valid}')

    if plot_invalid_only and is_valid:
        return

    plt.figure(figsize=(20, 5))
    plt_color = 'C0' if is_valid else 'red'
    plt.plot(abp, plt_color)
    plt.title(f'ABP - Entire Track - Case {case_id_to_check} - {abp.shape[0] / 500} seconds')
    plt.axhline(y = 65, color = 'maroon', linestyle = '--')
    plt.show()
In [44]:
def printSegments(segmentsMap, case_id_to_check, print_label, normalize=False):
    for (x1, x2, r, abp, ecg, eeg) in segmentsMap[case_id_to_check]:
        print(f'{print_label}: Case {case_id_to_check}')
        print(f'lookback window: {r} min')
        print(f'start time: {x1}')
        print(f'end time: {x2}')
        print(f'length: {x2 - x1} sec')
        
        print(f'ABP Shape: {abp.shape}')
        print(f'ECG Shape: {ecg.shape}')
        print(f'EEG Shape: {eeg.shape}')

        print(f'nanmin: {np.nanmin(abp)}')
        print(f'nanmean: {np.nanmean(abp)}')
        print(f'nanmax: {np.nanmax(abp)}')
        
        is_valid = isAbpSegmentValidNumpy(abp, debug=True)
        print(f'valid: {is_valid}')

        # ABP normalization
        x_abp = np.copy(abp)
        if normalize:
            x_abp -= 65
            x_abp /= 65

        plt.figure(figsize=(20, 5))
        plt_color = 'C0' if is_valid else 'red'
        plt.plot(x_abp, plt_color)
        plt.title('ABP')
        plt.axhline(y = 65, color = 'maroon', linestyle = '--')
        plt.show()

        plt.figure(figsize=(20, 5))
        plt.plot(ecg, 'teal')
        plt.title('ECG')
        plt.show()

        plt.figure(figsize=(20, 5))
        plt.plot(eeg, 'indigo')
        plt.title('EEG')
        plt.show()

        print()
In [45]:
def printEvents(abp_raw, eventsMap, case_id_to_check, print_label, normalize=False):
    for (x1, x2) in eventsMap[case_id_to_check]:
        print(f'{print_label}: Case {case_id_to_check}')
        print(f'start time: {x1}')
        print(f'end time: {x2}')
        print(f'length: {x2 - x1} sec')

        abp = abp_raw[x1*500:x2*500]
        print(f'ABP Shape: {abp.shape}')

        print(f'nanmin: {np.nanmin(abp)}')
        print(f'nanmean: {np.nanmean(abp)}')
        print(f'nanmax: {np.nanmax(abp)}')
        
        is_valid = isAbpSegmentValidNumpy(abp, debug=True)
        print(f'valid: {is_valid}')

        # ABP normalization
        x_abp = np.copy(abp)
        if normalize:
            x_abp -= 65
            x_abp /= 65

        plt.figure(figsize=(20, 5))
        plt_color = 'C0' if is_valid else 'red'
        plt.plot(x_abp, plt_color)
        plt.title('ABP')
        plt.axhline(y = 65, color = 'maroon', linestyle = '--')
        plt.show()

        print()
In [46]:
def moving_average(x, seconds=60):
    w = seconds * 500
    return np.convolve(np.squeeze(x), np.ones(w), 'valid') / w
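As a sanity check, the same convolution-based moving average behaves as expected on a constant signal. It is re-declared here with a configurable sample rate so the toy example stays small; the 500 Hz default matches the notebook's sampling rate:

```python
import numpy as np

def moving_average(x, seconds=60, hz=500):
    # sliding mean over `seconds` worth of samples at `hz` samples/second
    w = seconds * hz
    return np.convolve(np.squeeze(x), np.ones(w), 'valid') / w

sig = np.full((40, 1), 80.0)                 # 10 "seconds" of flat ABP at 4 Hz
avg = moving_average(sig, seconds=2, hz=4)   # window of 8 samples
print(avg.shape, avg[0])                     # (33,) 80.0
```

With `'valid'` mode the output is shorter than the input by one window length minus one sample, which is why the overlay plot below offsets the averaged curve by the operation start.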
In [47]:
def printAbpOverlay(
    case_id_to_check,
    positiveSegmentsMap,
    negativeSegmentsMap,
    iohEventsMap,
    cleanEventsMap,
    movingAverage=False
):
    def overlay_segments(plt, segmentsMap, color, linestyle, positive=False):
        for (x1, x2, r, abp, ecg, eeg) in segmentsMap:
            sx1 = x1*500
            sx2 = x2*500
            mycolor = color
            if positive:
                if r == 3:
                    mycolor = 'red'
                elif r == 5:
                    mycolor = 'crimson'
                elif r == 10:
                    mycolor = 'tomato'
                else:
                    mycolor = 'salmon'
            plt.axvline(x = sx1, color = mycolor, linestyle = linestyle)
            plt.axvline(x = sx2, color = mycolor, linestyle = linestyle)
            plt.axvspan(sx1, sx2, facecolor = mycolor, alpha = 0.1)

    def overlay_events(plt, abp, eventsMap, opstart, opend, color, linestyle):
        for (x1, x2) in eventsMap:
            sx1 = x1*500
            sx2 = x2*500
            # only plot valid events
            if isAbpSegmentValidNumpy(abp[sx1:sx2]):
                # that are within the operating start and end times
                if sx1 >= opstart and sx2 <= opend:
                    plt.axvline(x = sx1, color = color, linestyle = linestyle)
                    plt.axvline(x = sx2, color = color, linestyle = linestyle)
                    plt.axvspan(sx1, sx2, facecolor = color, alpha = 0.1)

    vf_path = f'{VITAL_MINI}/{case_id_to_check:04d}_mini.vital'

    if not os.path.isfile(vf_path):
        return

    vf = vitaldb.VitalFile(vf_path)
    abp = vf.to_numpy(TRACK_NAMES[0], 1/500)

    print(f'Case {case_id_to_check}')
    print(f'ABP Shape: {abp.shape}')

    print(f'nanmin: {np.nanmin(abp)}')
    print(f'nanmean: {np.nanmean(abp)}')
    print(f'nanmax: {np.nanmax(abp)}')

    #is_valid = isAbpSegmentValidNumpy(abp, debug=True)
    #print(f'valid: {is_valid}')

    plt.figure(figsize=(24, 8))
    plt_color = 'C0' #if is_valid else 'red'
    plt.plot(abp, plt_color)
    plt.title(f'ABP - Entire Track - Case {case_id_to_check} - {abp.shape[0] / 500} seconds')
    plt.axhline(y = 65, color = 'maroon', linestyle = '--')

    # https://matplotlib.org/stable/gallery/lines_bars_and_markers/linestyles.html#linestyles
    
    opstart = cases.loc[case_id_to_check]['opstart'].item() * 500
    plt.axvline(x = opstart, color = 'black', linestyle = '--', linewidth=2)
    plt.text(opstart - 600000, -200, f'Operation Start', fontsize=15)
    
    opend = cases.loc[case_id_to_check]['opend'].item() * 500
    plt.axvline(x = opend, color = 'black', linestyle = '--', linewidth=2)
    plt.text(opend + 50000, -200, r'Operation End', fontsize=15)
    
    overlay_segments(plt, positiveSegmentsMap[case_id_to_check], 'crimson', (0, (1, 1)), positive=True)
    
    overlay_segments(plt, negativeSegmentsMap[case_id_to_check], 'teal', (0, (1, 1)))

    overlay_events(plt, abp, iohEventsMap[case_id_to_check], opstart, opend, 'brown', '-')
    
    overlay_events(plt, abp, cleanEventsMap[case_id_to_check], opstart, opend, 'teal', '-')
    
    abp_mov_avg = None
    if movingAverage:
        abp_mov_avg = moving_average(abp[opstart:(opend + 60*500)])
        myx = np.arange(opstart, opstart + len(abp_mov_avg), 1)
        plt.plot(myx, abp_mov_avg, 'red')

    plt.show()

Reality Check All Cases¶

In [48]:
# Global flag to control creating track and segment plots.
# These plots are expensive to create, but very interesting.
# Disable when training in bulk to speed up notebook processing.
PERFORM_TRACK_VALIDITY_CHECKS = True
In [49]:
# Check if all ABPs are well formed. Fast load and scan of the raw track data for ABP.
DISPLAY_REALITY_CHECK_ABP=True
DISPLAY_REALITY_CHECK_ABP_FIRST_ONLY=True

if PERFORM_TRACK_VALIDITY_CHECKS and DISPLAY_REALITY_CHECK_ABP:
    for case_id_to_check in cases_of_interest_idx:
        printAbp(case_id_to_check, plot_invalid_only=False)
        
        if DISPLAY_REALITY_CHECK_ABP_FIRST_ONLY:
            break
Case 1
ABP Shape: (5771049, 1)
nanmin: -495.6260070800781
nanmean: 78.15254211425781
nanmax: 374.3236389160156
Presence of BP > 200
valid: False

Validate Malformed Vital Files - Missing One Or More Tracks¶

Cases which were found to be missing one or more data tracks are stored in malformed_tracks_filter.csv. These can be analyzed below:

In [50]:
# These are Vital Files removed because of malformed ABP waveforms.
DISPLAY_MALFORMED_ABP=True
DISPLAY_MALFORMED_ABP_FIRST_ONLY=True

if PERFORM_TRACK_VALIDITY_CHECKS and DISPLAY_MALFORMED_ABP:
    malformed_case_ids = pd.read_csv('malformed_tracks_filter.csv', header=None, names=['caseid']).set_index('caseid').index

    for case_id_to_check in malformed_case_ids:
        printAbp(case_id_to_check)
        
        if DISPLAY_MALFORMED_ABP_FIRST_ONLY:
            break

Validate Cases With No Segments Saved¶

Cases that did not result in any extracted segments can be analyzed below to better understand why:

In [51]:
DISPLAY_NO_SEGMENTS_CASES=True
DISPLAY_NO_SEGMENTS_CASES_FIRST_ONLY=True

if PERFORM_TRACK_VALIDITY_CHECKS and DISPLAY_NO_SEGMENTS_CASES:
    no_segments_case_ids = [3413, 3476, 3533, 3992, 4328, 4648, 4703, 4733, 5130, 5501, 5693, 5908]

    for case_id_to_check in no_segments_case_ids:
        printAbp(case_id_to_check)
        
        if DISPLAY_NO_SEGMENTS_CASES_FIRST_ONLY:
            break
Case 3413
ABP Shape: (3430848, 1)
nanmin: -228.025146484375
nanmean: 48.44272232055664
nanmax: 293.3521423339844
>10% NaN
valid: False

Select Case For Segment Extraction Validation¶

Generate segment data for one or more cases. Perform a deep analysis of event and segment quality.

In [52]:
# NOTE: This is always set so that if this section of checks is skipped, the model prediction plots will match.
my_cases_of_interest_idx = [84, 198, 60, 16, 27]

# Note: By default, match extract segments processing block above.
# However, regenerate data real time to allow seeing impacts on segment extraction.
# This is why both checkCache and forceWrite are false by default.
positiveSegmentsMap, negativeSegmentsMap, iohEventsMap, cleanEventsMap = None, None, None, None

if PERFORM_TRACK_VALIDITY_CHECKS:
    positiveSegmentsMap, negativeSegmentsMap, iohEventsMap, cleanEventsMap = \
        extract_segments(my_cases_of_interest_idx, debug=False,
                         checkCache=False, forceWrite=False, returnSegments=True,
                         skipInvalidCleanEvents=SKIP_INVALID_CLEAN_EVENTS,
                         skipInvalidIohEvents=SKIP_INVALID_IOH_EVENTS)
84: positiveSegments: 4, negativeSegments: 15
198: positiveSegments: 4, negativeSegments: 12
60: positiveSegments: 4, negativeSegments: 3
16: positiveSegments: 8, negativeSegments: 6
27: positiveSegments: 8, negativeSegments: 12

Select a specific case to perform detailed low level analysis.

In [53]:
case_id_to_check = my_cases_of_interest_idx[0]
print(case_id_to_check)
print()

if PERFORM_TRACK_VALIDITY_CHECKS:
    print((
        len(positiveSegmentsMap[case_id_to_check]),
        len(negativeSegmentsMap[case_id_to_check]),
        len(iohEventsMap[case_id_to_check]),
        len(cleanEventsMap[case_id_to_check])
    ))
84

(4, 15, 2, 7)
In [54]:
if PERFORM_TRACK_VALIDITY_CHECKS:
    printAbp(case_id_to_check)
Case 84
ABP Shape: (8856936, 1)
nanmin: -495.6260070800781
nanmean: 81.66030883789062
nanmax: 221.26779174804688
Presence of BP > 200
valid: False

Positive Events for Case - IOH Events¶

These events define the regions ahead of which positive segments will be extracted: each positive sample is taken at a fixed lookback (3/5/10/15 minutes) before the start of an IOH event.

In [55]:
tmp_abp = None

if PERFORM_TRACK_VALIDITY_CHECKS:
    tmp_vf_path = f'{VITAL_MINI}/{case_id_to_check:04d}_mini.vital'
    tmp_vf = vitaldb.VitalFile(tmp_vf_path)
    tmp_abp = tmp_vf.to_numpy(TRACK_NAMES[0], 1/500)
In [56]:
if PERFORM_TRACK_VALIDITY_CHECKS:
    printEvents(tmp_abp, iohEventsMap, case_id_to_check, 'IOH Event Segment', normalize=False)
IOH Event Segment: Case 84
start time: 10651
end time: 10903
length: 252 sec
ABP Shape: (126000, 1)
nanmin: 41.550628662109375
nanmean: 61.8976936340332
nanmax: 99.81057739257812
valid: True
IOH Event Segment: Case 84
start time: 10916
end time: 11030
length: 114 sec
ABP Shape: (57000, 1)
nanmin: -122.36724853515625
nanmean: 66.4285888671875
nanmax: 153.13327026367188
Presence of BP < 30
valid: False

Negative Events for Case - Non-IOH Events¶

These events define the clean (non-IOH) regions from which negative segments will be extracted: each negative sample is taken from within one of these regions.

In [57]:
if PERFORM_TRACK_VALIDITY_CHECKS:
    printEvents(tmp_abp, cleanEventsMap, case_id_to_check, 'Clean Event Segment', normalize=False)
Clean Event Segment: Case 84
start time: 2396
end time: 4196
length: 1800 sec
ABP Shape: (900000, 1)
nanmin: 34.638397216796875
nanmean: 96.14398193359375
nanmax: 163.00784301757812
valid: True
Clean Event Segment: Case 84
start time: 4197
end time: 5997
length: 1800 sec
ABP Shape: (900000, 1)
nanmin: 59.324859619140625
nanmean: 90.35917663574219
nanmax: 145.23361206054688
valid: True
Clean Event Segment: Case 84
start time: 5998
end time: 7798
length: 1800 sec
ABP Shape: (900000, 1)
nanmin: 30.688568115234375
nanmean: 84.37336730957031
nanmax: 137.33395385742188
valid: True
Clean Event Segment: Case 84
start time: 7799
end time: 9599
length: 1800 sec
ABP Shape: (900000, 1)
nanmin: 43.525543212890625
nanmean: 85.18022918701172
nanmax: 144.24612426757812
valid: True
Clean Event Segment: Case 84
start time: 11031
end time: 12831
length: 1800 sec
ABP Shape: (900000, 1)
nanmin: -495.6260070800781
nanmean: 86.6280746459961
nanmax: 147.20852661132812
Presence of BP < 30
valid: False
Clean Event Segment: Case 84
start time: 12832
end time: 14632
length: 1800 sec
ABP Shape: (900000, 1)
nanmin: -29.546295166015625
nanmean: 88.14582061767578
nanmax: 169.92001342773438
Presence of BP < 30
valid: False
Clean Event Segment: Case 84
start time: 14633
end time: 16433
length: 1800 sec
ABP Shape: (900000, 1)
nanmin: 50.437713623046875
nanmean: 86.21431732177734
nanmax: 140.29629516601562
valid: True

Positive Segments for Case - IOH Events Predicted Using These¶

One-minute regions sampled from before each IOH event and used as positive examples when training the model.

In [58]:
if PERFORM_TRACK_VALIDITY_CHECKS:
    printSegments(positiveSegmentsMap, case_id_to_check, 'Positive Segment - IOH Event', normalize=False)
Positive Segment - IOH Event: Case 84
lookback window: 3 min
start time: 10411
end time: 10471
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 64.26211547851562
nanmean: 88.7354965209961
nanmax: 125.48446655273438
valid: True
Positive Segment - IOH Event: Case 84
lookback window: 5 min
start time: 10291
end time: 10351
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 61.299774169921875
nanmean: 86.20820617675781
nanmax: 122.52212524414062
valid: True
Positive Segment - IOH Event: Case 84
lookback window: 10 min
start time: 9991
end time: 10051
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 57.349945068359375
nanmean: 84.10186004638672
nanmax: 119.55972290039062
valid: True
Positive Segment - IOH Event: Case 84
lookback window: 15 min
start time: 9691
end time: 9751
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 62.287200927734375
nanmean: 87.69979095458984
nanmax: 126.47195434570312
valid: True

Negative Segments for Case - Non-IOH Events Predicted Using These¶

One-minute regions sampled from clean periods and used as negative examples when training the model.

In [59]:
if PERFORM_TRACK_VALIDITY_CHECKS:
    printSegments(negativeSegmentsMap, case_id_to_check, 'Negative Segment - Non-Event', normalize=False)
Negative Segment - Non-Event: Case 84
lookback window: 0 min
start time: 2996
end time: 3056
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 69.19943237304688
nanmean: 97.4190673828125
nanmax: 140.29629516601562
valid: True
Negative Segment - Non-Event: Case 84
lookback window: 0 min
start time: 3296
end time: 3356
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 69.19943237304688
nanmean: 94.19501495361328
nanmax: 133.38412475585938
valid: True
Negative Segment - Non-Event: Case 84
lookback window: 0 min
start time: 3596
end time: 3656
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 67.22451782226562
nanmean: 95.66307830810547
nanmax: 137.33395385742188
valid: True
Negative Segment - Non-Event: Case 84
lookback window: 0 min
start time: 4797
end time: 4857
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 75.12417602539062
nanmean: 101.20699310302734
nanmax: 145.23361206054688
valid: True
Negative Segment - Non-Event: Case 84
lookback window: 0 min
start time: 5097
end time: 5157
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 61.299774169921875
nanmean: 83.68433380126953
nanmax: 120.54721069335938
valid: True
Negative Segment - Non-Event: Case 84
lookback window: 0 min
start time: 5397
end time: 5457
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 61.299774169921875
nanmean: 82.43463134765625
nanmax: 119.55972290039062
valid: True
Negative Segment - Non-Event: Case 84
lookback window: 0 min
start time: 6598
end time: 6658
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 60.312286376953125
nanmean: 82.77767181396484
nanmax: 118.57229614257812
valid: True
Negative Segment - Non-Event: Case 84
lookback window: 0 min
start time: 6898
end time: 6958
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 64.26211547851562
nanmean: 87.16991424560547
nanmax: 125.48446655273438
valid: True
Negative Segment - Non-Event: Case 84
lookback window: 0 min
start time: 7198
end time: 7258
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 61.299774169921875
nanmean: 82.817138671875
nanmax: 119.55972290039062
valid: True
Negative Segment - Non-Event: Case 84
lookback window: 0 min
start time: 8399
end time: 8459
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 61.299774169921875
nanmean: 86.23184204101562
nanmax: 121.53463745117188
valid: True
Negative Segment - Non-Event: Case 84
lookback window: 0 min
start time: 8699
end time: 8759
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 53.400115966796875
nanmean: 70.72852325439453
nanmax: 100.79806518554688
valid: True
Negative Segment - Non-Event: Case 84
lookback window: 0 min
start time: 8999
end time: 9059
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 57.349945068359375
nanmean: 76.20519256591797
nanmax: 106.72280883789062
valid: True
Negative Segment - Non-Event: Case 84
lookback window: 0 min
start time: 15233
end time: 15293
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 57.349945068359375
nanmean: 82.36672973632812
nanmax: 120.54721069335938
valid: True
Negative Segment - Non-Event: Case 84
lookback window: 0 min
start time: 15533
end time: 15593
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 58.337371826171875
nanmean: 91.39106750488281
nanmax: 135.35903930664062
valid: True
Negative Segment - Non-Event: Case 84
lookback window: 0 min
start time: 15833
end time: 15893
length: 60 sec
ABP Shape: (30000,)
ECG Shape: (30000,)
EEG Shape: (7680,)
nanmin: 53.400115966796875
nanmean: 81.62761688232422
nanmax: 122.52212524414062
valid: True

Overlay Plot of All Events and Segments Extracted¶

For each of the cases in my_cases_of_interest_idx overlay the results of event and segment extraction.

In [60]:
DISPLAY_OVERLAY_CHECK_ABP=True
DISPLAY_OVERLAY_CHECK_ABP_FIRST_ONLY=False

if PERFORM_TRACK_VALIDITY_CHECKS and DISPLAY_OVERLAY_CHECK_ABP:
    for case_id_to_check in my_cases_of_interest_idx:
        printAbpOverlay(case_id_to_check, positiveSegmentsMap, 
                        negativeSegmentsMap, iohEventsMap, cleanEventsMap, movingAverage=True)
        
        if DISPLAY_OVERLAY_CHECK_ABP_FIRST_ONLY:
            break
Case 84
ABP Shape: (8856936, 1)
nanmin: -495.6260070800781
nanmean: 81.66030883789062
nanmax: 221.26779174804688
Case 198
ABP Shape: (7512656, 1)
nanmin: -495.6260070800781
nanmean: 77.90882110595703
nanmax: 243.97933959960938
Case 60
ABP Shape: (7144543, 1)
nanmin: -495.6260070800781
nanmean: 71.52835845947266
nanmax: 370.3738098144531
Case 16
ABP Shape: (6433641, 1)
nanmin: -495.6260070800781
nanmean: 79.41278839111328
nanmax: 406.9096984863281
Case 27
ABP Shape: (8934972, 1)
nanmin: -495.6260070800781
nanmean: 74.25575256347656
nanmax: 296.3145446777344
In [61]:
# Memory cleanup
del tmp_abp

Generate Train/Val/Test Splits¶

When case segments are stored to disk, the filename is intentionally constructed so that its metadata can be easily reconstructed. The format is {case}_{startX}_{predWindow}_{label}.h5, where {case} is the case ID; {startX} is the segment's start offset, in seconds, from the start of the .vital track; {predWindow} is the prediction window, which can be 3, 5, 10, or 15 minutes; and {label} indicates whether the segment is associated with a hypotensive event (label=True) or not (label=False).

In [62]:
def get_segment_attributes_from_filename(file_path):
    pieces = os.path.basename(file_path).split('_')
    case = int(pieces[0])
    startX = int(pieces[1])
    predWindow = int(pieces[2])
    label = pieces[3].replace('.h5', '')
    return (case, startX, predWindow, label)
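For example, applying the parser above to a hypothetical segment path (the case ID and offsets here are made up for illustration; the parser is repeated so the example is self-contained):

```python
import os

def get_segment_attributes_from_filename(file_path):
    # same parser as above: {case}_{startX}_{predWindow}_{label}.h5
    pieces = os.path.basename(file_path).split('_')
    return (int(pieces[0]), int(pieces[1]), int(pieces[2]),
            pieces[3].replace('.h5', ''))

attrs = get_segment_attributes_from_filename('/segments/0084_10411_3_True.h5')
print(attrs)   # (84, 10411, 3, 'True')
```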
In [63]:
count_negative_samples = 0
count_positive_samples = 0

samples = []

seg_folder = f"{VITAL_EXTRACTED_SEGMENTS}"
filenames = [y for x in os.walk(seg_folder) for y in glob(os.path.join(x[0], '*.h5'))]

for filename in filenames:
    (case, start_x, pred_window, label) = get_segment_attributes_from_filename(filename)
    
    # only load segments for cases of interest; this folder may contain segments for hundreds of cases
    if case not in cases_of_interest_idx:
        continue

    if pred_window == 0 or pred_window == PREDICTION_WINDOW or PREDICTION_WINDOW == 'ALL':
        #print((case, start_x, pred_window, label))
        if label == 'True':
            count_positive_samples += 1
        else:
            count_negative_samples += 1
        sample = (filename, label)
        samples.append(sample)

print()
print(f"samples loaded:         {len(samples):5} ")
print(f'count negative samples: {count_negative_samples:5}')
print(f'count positive samples: {count_positive_samples:5}')
samples loaded:         19676 
count negative samples: 14298
count positive samples:  5378
In [64]:
# Divide by cases
sample_cases = defaultdict(list)

for fn, _ in samples:
    (case, start_x, pred_window, label) = get_segment_attributes_from_filename(fn)
    sample_cases[case].append((fn, label))

# understand any missing cases of interest
sample_cases_idx = pd.Index(sample_cases.keys())
missing_case_ids = cases_of_interest_idx.difference(sample_cases_idx)
print(f'cases with no samples: {missing_case_ids.shape[0]}')
print(f'    {missing_case_ids}')
cases with no samples: 20
    Index([ 149,  561,  864,  979, 1158, 1174, 1317, 1957, 2221, 2830, 2859, 4380,
       4755, 4783, 5080, 5204, 5266, 5755, 6275, 6360],
      dtype='int64')

Split data into training, validation, and test sets¶

Use a 6:1:3 ratio, and prevent samples from a single case from being split across different sets.

Note: the number of samples at each time point is not the same, because the first event can occur before the 3/5/10/15-minute mark.
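The nested `test_size` arithmetic used for the second split can be checked with plain numbers. This is a standalone sketch; the 100 case IDs are hypothetical and no sklearn is required:

```python
# Two-stage split arithmetic behind a 6:1:3 case-level split.
train_ratio, val_ratio = 0.6, 0.1
test_ratio = 1 - train_ratio - val_ratio              # 0.3

n = 100                                               # hypothetical case count
n_other = round(n * (1 - train_ratio))                # 40 cases left after train
# the second split takes test_ratio / (1 - train_ratio) of the remainder
n_test = round(n_other * (test_ratio / (1 - train_ratio)))
n_val = n_other - n_test

print(n - n_other, n_val, n_test)                     # 60 10 30
```

Splitting at the case level first, and only then assigning each case's segments to its set, is what keeps segments from one surgery out of multiple splits.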

In [65]:
# Set target sizes
train_ratio = 0.6
val_ratio = 0.1
test_ratio = 1 - train_ratio - val_ratio # ensure ratios sum to 1

# Split samples into train and other
sample_cases_train, sample_cases_other = train_test_split(list(sample_cases.keys()), test_size=(1 - train_ratio), random_state=RANDOM_SEED)

# Split other into val and test
sample_cases_val, sample_cases_test = train_test_split(sample_cases_other, test_size=(test_ratio / (1 - train_ratio)), random_state=RANDOM_SEED)

# Check how many samples are in each set
print(f'Train/Val/Test Summary by Cases')
print(f"Train cases:  {len(sample_cases_train):5}, ({len(sample_cases_train) / len(sample_cases):.2%})")
print(f"Val cases:    {len(sample_cases_val):5}, ({len(sample_cases_val) / len(sample_cases):.2%})")
print(f"Test cases:   {len(sample_cases_test):5}, ({len(sample_cases_test) / len(sample_cases):.2%})")
print(f"Total cases:  {(len(sample_cases_train) + len(sample_cases_val) + len(sample_cases_test)):5}")
Train/Val/Test Summary by Cases
Train cases:   1645, (59.97%)
Val cases:      274, (9.99%)
Test cases:     824, (30.04%)
Total cases:   2743

Now that the cases have been split according to the desired ratio, assign all of the segments for each case into the target (train, validation, test) set:

In [66]:
sample_cases_train = set(sample_cases_train)
sample_cases_val = set(sample_cases_val)
sample_cases_test = set(sample_cases_test)

samples_train = []
samples_val = []
samples_test = []

for cid, segs in sample_cases.items():
    if cid in sample_cases_train:
        for seg in segs:
            samples_train.append(seg)
    if cid in sample_cases_val:
        for seg in segs:
            samples_val.append(seg)
    if cid in sample_cases_test:
        for seg in segs:
            samples_test.append(seg)
            
# Check how many samples are in each set
print(f'Train/Val/Test Summary by Events')
print(f"Train events:  {len(samples_train):5}, ({len(samples_train) / len(samples):.2%})")
print(f"Val events:    {len(samples_val):5}, ({len(samples_val) / len(samples):.2%})")
print(f"Test events:   {len(samples_test):5}, ({len(samples_test) / len(samples):.2%})")
print(f"Total events:  {(len(samples_train) + len(samples_val) + len(samples_test)):5}")
Train/Val/Test Summary by Events
Train events:  11725, (59.59%)
Val events:     2013, (10.23%)
Test events:    5938, (30.18%)
Total events:  19676

Validate train/val/test Splits¶

Verify the label distribution in each set:

In [67]:
PRINT_ALL_CASE_SPLIT_DETAILS = False

case_to_sample_distribution = defaultdict(lambda: {'train': [0, 0], 'val': [0, 0], 'test': [0, 0]})

def populate_case_to_sample_distribution(mysamples, idx):
    neg = 0
    pos = 0
    
    for fn, _ in mysamples:
        (case, start_x, pred_window, label) = get_segment_attributes_from_filename(fn)
        slot = 0 if label == 'False' else 1
        case_to_sample_distribution[case][idx][slot] += 1
        if slot == 0:
            neg += 1
        else:
            pos += 1
                
    return (neg, pos)

train_neg, train_pos = populate_case_to_sample_distribution(samples_train, 'train')
val_neg, val_pos     = populate_case_to_sample_distribution(samples_val,   'val')
test_neg, test_pos   = populate_case_to_sample_distribution(samples_test,  'test')

print(f'Total Cases Present: {len(case_to_sample_distribution):5}')
print()

train_tot = train_pos + train_neg
val_tot = val_pos + val_neg
test_tot = test_pos + test_neg
print(f'Train: P: {train_pos:5} ({(train_pos/train_tot):.2}), N: {train_neg:5} ({(train_neg/train_tot):.2})')
print(f'Val:   P: {val_pos:5} ({(val_pos/val_tot):.2}), N: {val_neg:5} ({(val_neg/val_tot):.2})')
print(f'Test:  P: {test_pos:5} ({(test_pos/test_tot):.2}), N: {test_neg:5}  ({(test_neg/test_tot):.2})')
print()

total_pos = train_pos + val_pos + test_pos
total_neg = train_neg + val_neg + test_neg
total = total_pos + total_neg
print(f'P/N Ratio: {(total_pos)}:{(total_neg)}')
print(f'P Percent: {(total_pos/total):.2}')
print(f'N Percent: {(total_neg/total):.2}')
print()

if PRINT_ALL_CASE_SPLIT_DETAILS:
    for ci in sorted(case_to_sample_distribution.keys()):
        print(f'{ci}: {case_to_sample_distribution[ci]}')
Total Cases Present:  2743

Train: P:  3221 (0.27), N:  8504 (0.73)
Val:   P:   591 (0.29), N:  1422 (0.71)
Test:  P:  1566 (0.26), N:  4372  (0.74)

P/N Ratio: 5378:14298
P Percent: 0.27
N Percent: 0.73

Verify that no data has leaked between the train, validation, and test sets:

In [68]:
def check_data_leakage(full_data, train_data, val_data, test_data):
    # Convert to sets for easier operations
    full_data_set = set(full_data)
    train_data_set = set(train_data)
    val_data_set = set(val_data)
    test_data_set = set(test_data)

    # Check if train, val, test are subsets of full_data
    if not train_data_set.issubset(full_data_set):
        return "Train data has leakage"
    if not val_data_set.issubset(full_data_set):
        return "Validation data has leakage"
    if not test_data_set.issubset(full_data_set):
        return "Test data has leakage"

    # Check if train, val, test are disjoint
    if train_data_set & val_data_set:
        return "Train and validation data are not disjoint"
    if train_data_set & test_data_set:
        return "Train and test data are not disjoint"
    if val_data_set & test_data_set:
        return "Validation and test data are not disjoint"

    return "No data leakage detected"

print(check_data_leakage(list(sample_cases.keys()), sample_cases_train, sample_cases_val, sample_cases_test))
No data leakage detected

Create a custom vitalDataset class derived from Dataset to be used by the data loaders:

In [69]:
# Create vitalDataset class
class vitalDataset(Dataset):
    def __init__(self, samples, normalize_abp=False):
        self.samples = samples
        self.normalize_abp = normalize_abp

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        # Get metadata for this event
        segment = self.samples[idx]

        file_path = segment[0]
        label = (segment[1] == "True" or segment[1] == "True.vital")

        (abp, ecg, eeg) = get_segment_data(file_path)

        # fall back to zero-filled signals with a negative label when a track is missing
        if abp is None or eeg is None or ecg is None:
            return (np.zeros(30000), np.zeros(30000), np.zeros(7680), 0)
        
        if self.normalize_abp:
            abp -= 65
            abp /= 65

        return abp, ecg, eeg, label

NORMALIZE_ABP = False

train_dataset = vitalDataset(samples_train, NORMALIZE_ABP)
val_dataset = vitalDataset(samples_val, NORMALIZE_ABP)
test_dataset = vitalDataset(samples_test, NORMALIZE_ABP)
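When NORMALIZE_ABP is enabled, the (abp - 65) / 65 transform centers the 65 mmHg hypotension threshold at 0 and scales by the same constant. A quick standalone check on a synthetic array:

```python
import numpy as np

# Threshold-centered scaling as used in vitalDataset above.
abp = np.array([65.0, 97.5, 130.0])
norm = (abp - 65) / 65          # 65 mmHg (the IOH threshold) maps to 0
print(norm)                     # 65 -> 0.0, 97.5 -> 0.5, 130 -> 1.0
```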

Train/val/test Splits Summary Statistics¶

Analyze the distribution of mean ABP values across each dataset to verify that their characteristics are consistent:

In [70]:
def generate_nan_means(mydataset):
    xs = np.zeros(len(mydataset))
    ys = np.zeros(len(mydataset), dtype=int)

    for i, (abp, ecg, eeg, y) in enumerate(iter(mydataset)):
        xs[i] = np.nanmean(abp)
        ys[i] = int(y)

    return pd.DataFrame({'abp_nanmean': xs, 'label': ys})
In [71]:
def generate_nan_means_summaries(tr, va, te, group='all'):
    if group == 'all':
        return pd.DataFrame({
            'train': tr.describe()['abp_nanmean'],
            'validation': va.describe()['abp_nanmean'],
            'test': te.describe()['abp_nanmean']
        })
    
    mytr = tr.reset_index()
    myva = va.reset_index()
    myte = te.reset_index()
    
    label_flag = (group == 'positive')
    
    return pd.DataFrame({
        'train':      mytr[mytr['label'] == label_flag].describe()['abp_nanmean'],
        'validation': myva[myva['label'] == label_flag].describe()['abp_nanmean'],
        'test':       myte[myte['label'] == label_flag].describe()['abp_nanmean']
    })
In [72]:
def plot_nan_means(df, plot_label):
    mydf = df.reset_index()

    maxCases = 'ALL' if MAX_CASES is None else MAX_CASES
    plot_title = f'{plot_label} - ABP nanmean Values, {PREDICTION_WINDOW} Minutes, {maxCases} Cases'
    
    ax = mydf[mydf['label'] == False].plot.scatter(
        x='index', y='abp_nanmean', color='DarkBlue', label='Negative', 
        title=plot_title, figsize=(16,9))

    negative_median = mydf[mydf['label'] == False]['abp_nanmean'].median()
    ax.axhline(y=negative_median, color='DarkBlue', linestyle='--', label='Negative Median')
    
    mydf[mydf['label'] == True].plot.scatter(
        x='index', y='abp_nanmean', color='DarkOrange', label='Positive', ax=ax);
    
    positive_median = mydf[mydf['label'] == True]['abp_nanmean'].median()
    ax.axhline(y=positive_median, color='DarkOrange', linestyle='--', label='Positive Median')
    
    ax.legend(loc='upper right')
In [73]:
def plot_nan_means_hist(df):
    df.plot.hist(column=['abp_nanmean'], by='label', bins=50, figsize=(10, 8));
In [74]:
train_abp_nanmeans = generate_nan_means(train_dataset)
val_abp_nanmeans = generate_nan_means(val_dataset)
test_abp_nanmeans = generate_nan_means(test_dataset)

ABP Nanmean Summaries¶

In [75]:
generate_nan_means_summaries(train_abp_nanmeans, val_abp_nanmeans, test_abp_nanmeans)
Out[75]:
train validation test
count 11725.000000 2013.000000 5938.000000
mean 85.342557 84.527469 85.337452
std 12.102408 11.928181 12.139388
min 65.136129 65.176681 65.178063
25% 75.843523 75.141869 75.794924
50% 83.549179 82.839065 83.643432
75% 93.382970 92.584281 92.977931
max 138.285504 131.649859 147.949437
In [76]:
generate_nan_means_summaries(train_abp_nanmeans, val_abp_nanmeans, test_abp_nanmeans, group='positive')
Out[76]:
train validation test
count 3221.000000 591.000000 1566.000000
mean 76.393673 75.830934 76.462263
std 9.120690 8.735726 9.256418
min 65.136129 65.176681 65.178063
25% 70.014729 69.719051 69.994147
50% 74.056950 73.944380 74.136967
75% 80.011449 79.447594 80.095860
max 132.143619 124.815472 136.381225
In [77]:
generate_nan_means_summaries(train_abp_nanmeans, val_abp_nanmeans, test_abp_nanmeans, group='negative')
Out[77]:
train validation test
count 8504.000000 1422.000000 4372.000000
mean 88.732062 88.141852 88.516442
std 11.341231 11.191253 11.452285
min 65.225560 66.221575 65.476802
25% 80.095975 79.646864 79.987240
50% 87.345195 86.884530 87.118146
75% 95.959147 95.516197 95.679384
max 138.285504 131.649859 147.949437

ABP Nanmean Histograms¶

In [78]:
plot_nan_means_hist(train_abp_nanmeans)
In [79]:
plot_nan_means_hist(val_abp_nanmeans)
In [80]:
plot_nan_means_hist(test_abp_nanmeans)

ABP Nanmean Scatter Plots¶

In [81]:
plot_nan_means(train_abp_nanmeans, 'Train')
In [82]:
plot_nan_means(val_abp_nanmeans, 'Validation')
In [83]:
plot_nan_means(test_abp_nanmeans, 'Test')
In [84]:
# Memory cleanup
del train_abp_nanmeans
del val_abp_nanmeans
del test_abp_nanmeans

Classification Studies¶

Check if data can be easily classified using non-deep learning methods. Create a balanced sample of IOH and non-IOH events and use a simple classifier to see if the data can be easily separated. Datasets which can be easily separated by non-deep learning methods should also be easily classified by deep learning models.

In [85]:
MAX_CLASSIFICATION_SAMPLES = 250
MAX_SAMPLE_SIZE = 1600
classification_sample_size = min(MAX_SAMPLE_SIZE, len(samples))

classification_samples = random.sample(samples, classification_sample_size)

positive_samples = []
negative_samples = []

for sample in classification_samples:
    (sampleAbp, sampleEcg, sampleEeg) = get_segment_data(sample[0])
    
    if sample[1] == "True":
        positive_samples.append([sample[0], True, sampleAbp, sampleEcg, sampleEeg])
    else:
        negative_samples.append([sample[0], False, sampleAbp, sampleEcg, sampleEeg])

positive_samples = pd.DataFrame(positive_samples, columns=["file_path", "segment_label", "segment_abp", "segment_ecg", "segment_eeg"])
negative_samples = pd.DataFrame(negative_samples, columns=["file_path", "segment_label", "segment_abp", "segment_ecg", "segment_eeg"])

total_to_sample_pos = min(MAX_CLASSIFICATION_SAMPLES, len(positive_samples))
total_to_sample_neg = min(MAX_CLASSIFICATION_SAMPLES, len(negative_samples))

# Select up to MAX_CLASSIFICATION_SAMPLES random samples where segment_label is True
positive_samples = positive_samples.sample(total_to_sample_pos, random_state=RANDOM_SEED)
# Select up to MAX_CLASSIFICATION_SAMPLES random samples where segment_label is False
negative_samples = negative_samples.sample(total_to_sample_neg, random_state=RANDOM_SEED)

print(f'positive_samples: {len(positive_samples)}')
print(f'negative_samples: {len(negative_samples)}')

# Combine the positive and negative samples
samples_balanced = pd.concat([positive_samples, negative_samples])
positive_samples: 250
negative_samples: 250

Define function to build data for study. Each waveform field can be enabled or disabled:

In [86]:
def get_x_y(samples, use_abp, use_ecg, use_eeg):
    # Create X and y, using data from `samples_balanced` and the `use_abp`, `use_ecg`, and `use_eeg` variables
    X = []
    y = []
    for i in range(len(samples)):
        row = samples.iloc[i]
        sample = np.array([])
        if use_abp:
            if len(row['segment_abp']) != 30000:
                print(len(row['segment_abp']))
            sample = np.append(sample, row['segment_abp'])
        if use_ecg:
            if len(row['segment_ecg']) != 30000:
                print(len(row['segment_ecg']))
            sample = np.append(sample, row['segment_ecg'])
        if use_eeg:
            if len(row['segment_eeg']) != 7680:
                print(len(row['segment_eeg']))
            sample = np.append(sample, row['segment_eeg'])
        X.append(sample)
        # Convert the label from boolean to 0 or 1
        y.append(int(row['segment_label']))
    return X, y

KNN¶

Define KNN run. This is configurable to enable or disable different data channels so that we can study them individually or together:

In [87]:
N_NEIGHBORS = 20

def run_knn(samples, use_abp, use_ecg, use_eeg):
    # Get samples
    X,y = get_x_y(samples, use_abp, use_ecg, use_eeg)

    # Split samples into train and val
    knn_X_train, knn_X_test, knn_y_train, knn_y_test = train_test_split(X, y, test_size=0.2, random_state=RANDOM_SEED)

    # Normalize the data
    scaler = StandardScaler()
    scaler.fit(knn_X_train)

    knn_X_train = scaler.transform(knn_X_train)
    knn_X_test = scaler.transform(knn_X_test)

    # Initialize the KNN classifier
    knn = KNeighborsClassifier(n_neighbors=N_NEIGHBORS)

    # Train the KNN classifier
    knn.fit(knn_X_train, knn_y_train)

    # Make predictions on the test set
    knn_y_pred = knn.predict(knn_X_test)

    # Evaluate the KNN classifier
    print(f"ABP: {use_abp}, ECG: {use_ecg}, EEG: {use_eeg}")
    print(f"Confusion matrix:\n{confusion_matrix(knn_y_test, knn_y_pred)}")
    print(f"Classification report:\n{classification_report(knn_y_test, knn_y_pred)}")

Study each waveform independently, then ABP+EEG (which had the best results in the paper), and ABP+ECG+EEG:

In [88]:
run_knn(samples_balanced, use_abp=True, use_ecg=False, use_eeg=False)
run_knn(samples_balanced, use_abp=False, use_ecg=True, use_eeg=False)
run_knn(samples_balanced, use_abp=False, use_ecg=False, use_eeg=True)
run_knn(samples_balanced, use_abp=True, use_ecg=False, use_eeg=True)
run_knn(samples_balanced, use_abp=True, use_ecg=True, use_eeg=True)
ABP: True, ECG: False, EEG: False
Confusion matrix:
[[48  6]
 [16 30]]
Classification report:
              precision    recall  f1-score   support

           0       0.75      0.89      0.81        54
           1       0.83      0.65      0.73        46

    accuracy                           0.78       100
   macro avg       0.79      0.77      0.77       100
weighted avg       0.79      0.78      0.78       100

ABP: False, ECG: True, EEG: False
Confusion matrix:
[[32 22]
 [21 25]]
Classification report:
              precision    recall  f1-score   support

           0       0.60      0.59      0.60        54
           1       0.53      0.54      0.54        46

    accuracy                           0.57       100
   macro avg       0.57      0.57      0.57       100
weighted avg       0.57      0.57      0.57       100

ABP: False, ECG: False, EEG: True
Confusion matrix:
[[ 2 52]
 [ 0 46]]
Classification report:
              precision    recall  f1-score   support

           0       1.00      0.04      0.07        54
           1       0.47      1.00      0.64        46

    accuracy                           0.48       100
   macro avg       0.73      0.52      0.36       100
weighted avg       0.76      0.48      0.33       100

ABP: True, ECG: False, EEG: True
Confusion matrix:
[[42 12]
 [ 7 39]]
Classification report:
              precision    recall  f1-score   support

           0       0.86      0.78      0.82        54
           1       0.76      0.85      0.80        46

    accuracy                           0.81       100
   macro avg       0.81      0.81      0.81       100
weighted avg       0.81      0.81      0.81       100

ABP: True, ECG: True, EEG: True
Confusion matrix:
[[39 15]
 [ 6 40]]
Classification report:
              precision    recall  f1-score   support

           0       0.87      0.72      0.79        54
           1       0.73      0.87      0.79        46

    accuracy                           0.79       100
   macro avg       0.80      0.80      0.79       100
weighted avg       0.80      0.79      0.79       100

Based on the macro-average F1-scores above, the ABP and ABP+EEG data are somewhat predictive, the ECG and EEG data are weakly predictive, and the ABP+ECG+EEG data are somewhat less predictive than either ABP or ABP+EEG.

Models based on ABP alone or on ABP+EEG are therefore expected to train well. The other signals appear to mostly add noise and are not strongly predictive on their own. This agrees with the results from the paper.

t-SNE¶

Define t-SNE run. This is configurable to enable or disable different data channels so that we can study them individually or together:

In [89]:
def run_tsne(samples, use_abp, use_ecg, use_eeg):
    # Get samples
    X,y = get_x_y(samples, use_abp, use_ecg, use_eeg)
    
    # Convert X and y to numpy arrays
    X = np.array(X)
    y = np.array(y)

    # Run t-SNE to embed the samples in two dimensions (the scatter plot below uses components 0 and 1)
    tsne = TSNE(n_components=2, random_state=RANDOM_SEED)
    X_tsne = tsne.fit_transform(X)
    
    # Create a scatter plot of the t-SNE representation
    plt.figure(figsize=(16, 9))
    plt.title(f"use_abp={use_abp}, use_ecg={use_ecg}, use_eeg={use_eeg}")
    for i, label in enumerate(set(y)):
        plt.scatter(X_tsne[y == label, 0], X_tsne[y == label, 1], label=label)
    plt.legend()
    plt.show()

Study each waveform independently, then ABP+EEG (which had the best results in the paper), and ABP+ECG+EEG:

In [90]:
run_tsne(samples_balanced, use_abp=True, use_ecg=False, use_eeg=False)
run_tsne(samples_balanced, use_abp=False, use_ecg=True, use_eeg=False)
run_tsne(samples_balanced, use_abp=False, use_ecg=False, use_eeg=True)
run_tsne(samples_balanced, use_abp=True, use_ecg=False, use_eeg=True)
run_tsne(samples_balanced, use_abp=True, use_ecg=True, use_eeg=True)

Based on the plots above, the ABP-only, ABP+EEG, and ABP+ECG+EEG data are somewhat separable, though with outliers, and should be learnable by our model. The ECG-only and EEG-only data are not readily separable. This agrees with the results from the paper.

In [91]:
# Memory cleanup
del samples_balanced

Model¶

The model implementation is based on the CNN architecture described in Jo Y-Y et al. (2022). It is designed to handle 1, 2, or 3 biosignal waveforms simultaneously, allowing for flexible model configurations based on different combinations of physiological data:

  • ABP alone
  • EEG alone
  • ECG alone
  • ABP + EEG
  • ABP + ECG
  • EEG + ECG
  • ABP + EEG + ECG

Model Architecture¶

The architecture, as depicted in Figure 2 from the original paper, utilizes a ResNet-based approach tailored for time-series data from different physiological signals. The model architecture is adapted to handle varying input signal frequencies, with specific hyperparameters for each signal type, particularly EEG, due to its distinct characteristics compared to ABP and ECG. A diagram of the model architecture is shown below:

Architecture of the hypotension risk prediction model using multiple waveforms

Each input signal is processed through a sequence of twelve 7-layer residual blocks, followed by flattening and a linear transformation that produces a 32-dimensional feature vector per signal type. These vectors are then concatenated (when multiple signals are used) and passed through two additional linear layers to produce a single output value representing the IOH index. A threshold, determined experimentally to minimize the difference between sensitivity and specificity, is applied to this index to perform binary classification for predicting IOH events.

The hyperparameters for the residual blocks are specified in Supplemental Table 1 of the original paper and vary by signal type.

A forward pass through each signal path traverses 85 layers before concatenation (twelve 7-layer residual blocks plus the per-signal linear layer), followed by two more linear layers and a final sigmoid activation layer to produce the prediction measure.
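The threshold rule described above (minimize the difference between sensitivity and specificity) can be illustrated with scikit-learn's ROC utilities. This is a minimal sketch on hypothetical scores and labels, not the project's evaluation code:

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical model scores and labels, purely for illustration
y_true  = np.array([0, 0, 0, 1, 1, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.9, 0.2, 0.6])

fpr, tpr, thresholds = roc_curve(y_true, y_score)
sensitivity = tpr          # true positive rate
specificity = 1 - fpr      # true negative rate

# Pick the threshold that minimizes |sensitivity - specificity|
best = np.argmin(np.abs(sensitivity - specificity))
print(thresholds[best], sensitivity[best], specificity[best])
```

For these perfectly separable toy scores, the selected threshold gives sensitivity and specificity of 1.0 each; on real data the two values meet somewhere below 1.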

Residual Block Definition¶

Each residual block consists of the following seven layers:

  • Batch normalization
  • ReLU
  • Dropout (0.5)
  • 1D convolution
  • Batch normalization
  • ReLU
  • 1D convolution

Skip connections are included to aid in gradient flow during training, with optional 1D convolution in the skip connection to align dimensions.
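As a standalone sketch (assuming PyTorch, with hypothetical layer shapes), the dimension-aligning skip connection works like this: when the main path changes the channel count, a convolution projects the identity before the addition:

```python
import torch
import torch.nn as nn

# Hypothetical shapes: batch of 4, 2 input channels, sequence length 100
x = torch.randn(4, 2, 100)
main = nn.Conv1d(2, 4, kernel_size=7, padding=3)   # main path changes channel count, keeps length
skip = nn.Conv1d(2, 4, kernel_size=1)              # 1x1 conv aligns the skip path's channels

out = main(x)
identity = skip(x) if x.shape != out.shape else x  # project only when shapes differ
out = out + identity
print(out.shape)  # torch.Size([4, 4, 100])
```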

Residual Block Hyperparameters¶

The hyperparameters are detailed in Supplemental Table 1 of the original paper. A screenshot of these hyperparameters is provided for reference below:

Supplemental Table 1 from original paper

Note: Please be aware of a transcription error in the original paper's Supplemental Table 1 for the ECG+ABP configuration in Residual Blocks 11 and 12, where the output size should be 469 × 6 instead of the reported 496 × 6.
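The per-block sequence lengths in the table follow from halving (rounding up) at every other block. This small sketch, written for this notebook, reproduces the sizes used below, showing how the ABP/ECG path reaches 469 and the EEG path reaches 120:

```python
import math

def block_sizes(input_len, n_blocks=12):
    # Every odd-numbered block (1-indexed) halves the sequence length, rounding up
    sizes = [input_len]
    for i in range(n_blocks):
        sizes.append(math.ceil(sizes[-1] / 2) if i % 2 == 0 else sizes[-1])
    return sizes

print(block_sizes(30000))  # ends ..., 938, 938, 469, 469 (ABP/ECG at 500 Hz)
print(block_sizes(7680))   # ends ..., 240, 240, 120, 120 (EEG at 128 Hz)
```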

In [92]:
# Define the residual block which is implemented for each biosignal path
class ResidualBlock(nn.Module):
    def __init__(self, in_features: int, out_features: int, in_channels: int, out_channels: int, kernel_size: int, stride: int = 1, size_down: bool = False, ignoreSkipConnection: bool = False) -> None:
        super(ResidualBlock, self).__init__()
        
        self.ignoreSkipConnection = ignoreSkipConnection

        # calculate the appropriate padding required to ensure expected sequence lengths out of each residual block
        padding = int((((stride-1)*in_features)-stride+kernel_size)/2)

        self.size_down = size_down
        self.bn1 = nn.BatchNorm1d(in_channels)
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(0.5)
        self.conv1 = nn.Conv1d(in_channels, out_channels, kernel_size=kernel_size, stride=1, padding=padding, bias=False)
        self.bn2 = nn.BatchNorm1d(out_channels)
        self.conv2 = nn.Conv1d(out_channels, out_channels, kernel_size=kernel_size, stride=1, padding=padding, bias=False)
        
        self.residualConv = nn.Conv1d(in_channels, out_channels, kernel_size=kernel_size, stride=1, padding=padding, bias=False)

        # It is unclear where in the sequence the downsampling should take place; the size reduction itself is specified in Supplemental Table S1
        if self.size_down:
            pool_padding = (1 if (in_features % 2 > 0) else 0)
            self.downsample = nn.MaxPool1d(kernel_size=2, stride=2, padding = pool_padding)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity = x

        out = self.bn1(x)
        out = self.relu(out)
        out = self.dropout(out)
        out = self.conv1(out)

        if self.size_down:
            out = self.downsample(out)

        out = self.bn2(out)
        out = self.relu(out)
        out = self.conv2(out)

        if not self.ignoreSkipConnection:
          if out.shape != identity.shape:
              # run the residual through a convolution when necessary
              identity = self.residualConv(identity)

              outlen = np.prod(out.shape)
              idlen = np.prod(identity.shape)
              # downsample when required
              if idlen > outlen:
                  identity = self.downsample(identity)
              # match dimensions
              identity = identity.reshape(out.shape)

          # add the residual
          out += identity

        return out

# Define the parameterizable model
class HypotensionCNN(nn.Module):
    def __init__(self, useAbp: bool = True, useEeg: bool = False, useEcg: bool = False, device: str = "cpu", nResiduals: int = 12, ignoreSkipConnection: bool = False, useSigmoid: bool = True) -> None:
        assert useAbp or useEeg or useEcg, "At least one data track must be used"
        assert nResiduals > 0 and nResiduals <= 12, "Number of residual blocks must be between 1 and 12"
        super(HypotensionCNN, self).__init__()

        self.device = device

        self.useAbp = useAbp
        self.useEeg = useEeg
        self.useEcg = useEcg
        self.nResiduals = nResiduals
        self.useSigmoid = useSigmoid

        # Size of the concatenated output from the residual blocks
        concatSize = 0

        if useAbp:
          self.abpBlocks = []
          self.abpMultipliers = [1, 2, 2, 2, 2, 2, 4, 4, 4, 4, 4, 6, 6]
          self.abpSizes = [30000, 15000, 15000, 7500, 7500, 3750, 3750, 1875, 1875, 938, 938, 469, 469]
          for i in range(self.nResiduals):
            downsample = i % 2 == 0
            self.abpBlocks.append(ResidualBlock(self.abpSizes[i], self.abpSizes[i+1], self.abpMultipliers[i], self.abpMultipliers[i+1], 15 if i < 6 else 7, 1, downsample, ignoreSkipConnection))
          self.abpResiduals = nn.Sequential(*self.abpBlocks)
          self.abpFc = nn.Linear(self.abpMultipliers[self.nResiduals] * self.abpSizes[self.nResiduals], 32)
          concatSize += 32
        
        if useEcg:
          self.ecgBlocks = []
          self.ecgMultipliers = [1, 2, 2, 2, 2, 2, 4, 4, 4, 4, 4, 6, 6]
          self.ecgSizes = [30000, 15000, 15000, 7500, 7500, 3750, 3750, 1875, 1875, 938, 938, 469, 469]

          for i in range(self.nResiduals):
            downsample = i % 2 == 0
            self.ecgBlocks.append(ResidualBlock(self.ecgSizes[i], self.ecgSizes[i+1], self.ecgMultipliers[i], self.ecgMultipliers[i+1], 15 if i < 6 else 7, 1, downsample, ignoreSkipConnection))
          self.ecgResiduals = nn.Sequential(*self.ecgBlocks)
          self.ecgFc = nn.Linear(self.ecgMultipliers[self.nResiduals] * self.ecgSizes[self.nResiduals], 32)
          concatSize += 32

        if useEeg:
          self.eegBlocks = []
          self.eegMultipliers = [1, 2, 2, 2, 2, 2, 4, 4, 4, 4, 4, 6, 6]
          self.eegSizes = [7680, 3840, 3840, 1920, 1920, 960, 960, 480, 480, 240, 240, 120, 120]

          for i in range(self.nResiduals):
            downsample = i % 2 == 0
            self.eegBlocks.append(ResidualBlock(self.eegSizes[i], self.eegSizes[i+1], self.eegMultipliers[i], self.eegMultipliers[i+1], 7 if i < 6 else 3, 1, downsample, ignoreSkipConnection))
          self.eegResiduals = nn.Sequential(*self.eegBlocks)
          self.eegFc = nn.Linear(self.eegMultipliers[self.nResiduals] * self.eegSizes[self.nResiduals], 32)
          concatSize += 32

        # The fullLinear1 layer accepts the outputs of the concatenation of the ResidualBlocks from each biosignal path
        self.fullLinear1 = nn.Linear(concatSize, 16)
        self.fullLinear2 = nn.Linear(16, 1)
        self.sigmoid = nn.Sigmoid()


    def forward(self, abp: torch.Tensor, eeg: torch.Tensor, ecg: torch.Tensor) -> torch.Tensor:
        batchSize = len(abp)

        # conditionally operate ABP, EEG, and ECG networks
        tensors = []
        if self.useAbp:
          self.abpResiduals.to(self.device)
          abp = self.abpResiduals(abp)
          totalLen = np.prod(abp.shape)
          abp = torch.reshape(abp, (batchSize, int(totalLen / batchSize)))
          abp = self.abpFc(abp)
          tensors.append(abp)

        if self.useEeg:
          self.eegResiduals.to(self.device)
          eeg = self.eegResiduals(eeg)
          totalLen = np.prod(eeg.shape)
          eeg = torch.reshape(eeg, (batchSize, int(totalLen / batchSize)))
          eeg = self.eegFc(eeg)
          tensors.append(eeg)
        
        if self.useEcg:
          self.ecgResiduals.to(self.device)
          ecg = self.ecgResiduals(ecg)
          totalLen = np.prod(ecg.shape)
          ecg = torch.reshape(ecg, (batchSize, int(totalLen / batchSize)))
          ecg = self.ecgFc(ecg)
          tensors.append(ecg)

        # concatenate the tensors along dimension 1 if there's more than one, otherwise use the single tensor
        merged = torch.cat(tensors, dim=1) if len(tensors) > 1 else tensors[0]

        totalLen = np.prod(merged.shape)
        merged = torch.reshape(merged, (batchSize, int(totalLen / batchSize)))
        out = self.fullLinear1(merged)
        out = self.fullLinear2(out)
        # Skip the final model sigmoid when using BCEWithLogitsLoss loss function
        if self.useSigmoid:
            out = self.sigmoid(out)

        return out

Training¶

The training loop is highly parameterizable, and all aspects can be configured. The original paper uses binary cross-entropy as the loss function, the Adam optimizer, and a learning rate of 0.0001, with training configured to run for up to 100 epochs and early stopping if no improvement in validation loss is observed over five consecutive epochs. Our models were run with the same parameters but with longer patience values, to account for the noisier and smaller dataset we had access to.
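Patience-based early stopping of this kind can be sketched in plain Python. The `should_stop` helper and the loss values below are illustrative, not part of the project code:

```python
def should_stop(val_losses, patience=5):
    # Stop when the best validation loss occurred at least `patience` epochs ago
    if len(val_losses) <= patience:
        return False
    best_epoch = min(range(len(val_losses)), key=lambda i: val_losses[i])
    return (len(val_losses) - 1) - best_epoch >= patience

# Hypothetical loss curve: improves until epoch 2, then stalls for five epochs
losses = [0.70, 0.60, 0.55, 0.56, 0.57, 0.58, 0.59, 0.60]
print(should_stop(losses))  # True
```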

Define a function to train the model for one epoch. Collect the losses so the mean can be reported.

In [93]:
def train_model_one_iter(model, device, loss_func, optimizer, train_loader):
    model.train()
    train_losses = []
    
    for abp, ecg, eeg, label in tqdm(train_loader):
        batch = len(abp)
        abp = abp.reshape(batch, 1, -1).type(torch.FloatTensor).to(device)
        ecg = ecg.reshape(batch, 1, -1).type(torch.FloatTensor).to(device)
        eeg = eeg.reshape(batch, 1, -1).type(torch.FloatTensor).to(device)
        label = label.type(torch.float).reshape(batch, 1).to(device)

        optimizer.zero_grad()
        mdl = model(abp, eeg, ecg)
        loss = loss_func(torch.nan_to_num(mdl), label)
        loss.backward()
        optimizer.step()
        train_losses.append(loss.cpu().data.numpy())
    return np.mean(train_losses)

Evaluate the model using the provided loss function. This is typically called on the validation dataset at each epoch:

In [94]:
def evaluate_model(model, loss_func, val_loader):
    model.eval()
    val_losses = []
    for abp, ecg, eeg, label in tqdm(val_loader):
        batch = len(abp)

        abp = abp.reshape(batch, 1, -1).type(torch.FloatTensor).to(device)
        ecg = ecg.reshape(batch, 1, -1).type(torch.FloatTensor).to(device)
        eeg = eeg.reshape(batch, 1, -1).type(torch.FloatTensor).to(device)
        label = label.type(torch.float).reshape(batch, 1).to(device)

        mdl = model(abp, eeg, ecg)
        loss = loss_func(torch.nan_to_num(mdl), label)
        val_losses.append(loss.cpu().data.numpy())
    return np.mean(val_losses)

Define a function to plot the training and validation losses from the entire training run and indicate at which epoch the validation loss was minimized. This is typically patience epochs before the end of training:

In [95]:
def plot_losses(train_losses, val_losses, best_epoch, experimentName):
    print()
    print(f'Plot Validation and Loss Values from Training')
    print(f'  Epoch with best Validation Loss:  {best_epoch:3}, {val_losses[best_epoch]:.4}')

    # Create x-axis values for epochs
    epochs = range(0, len(train_losses))

    plt.figure(figsize=(16, 9))

    # Plot the training and validation losses
    plt.plot(epochs, train_losses, 'b', label='Training Loss')
    plt.plot(epochs, val_losses, 'r', label='Validation Loss')

    # Add a vertical bar at the best_epoch
    plt.axvline(x=best_epoch, color='g', linestyle='--', label='Best Epoch')

    # Shade everything to the right of the best_epoch a light red
    plt.axvspan(best_epoch, max(epochs), facecolor='r', alpha=0.1)

    # Add labels and title
    plt.xlabel('Epochs')
    plt.ylabel('Loss')
    plt.title(experimentName)

    # Add legend
    plt.legend(loc='upper right')

    # Save plot to disk
    plt.savefig(os.path.join(VITAL_RUNS, f'{experimentName}_losses.png'))

    # Show the plot
    plt.show()

Define a function to calculate the complete performance-metric profile of a model. As in the original paper, the classification threshold is chosen as the argmin of |sensitivity - specificity|:

In [96]:
def eval_model(model, device, dataloader, loss_func, print_detailed: bool = False):
    model.eval()
    model = model.to(device)
    total_loss = 0
    all_predictions = []
    all_labels = []

    with torch.no_grad():
        for abp, ecg, eeg, label in tqdm(dataloader):
            batch = len(abp)
    
            abp = torch.nan_to_num(abp.reshape(batch, 1, -1)).type(torch.FloatTensor).to(device)
            ecg = torch.nan_to_num(ecg.reshape(batch, 1, -1)).type(torch.FloatTensor).to(device)
            eeg = torch.nan_to_num(eeg.reshape(batch, 1, -1)).type(torch.FloatTensor).to(device)
            label = label.type(torch.float).reshape(batch, 1).to(device)
   
            pred = model(abp, eeg, ecg)
            loss = loss_func(pred, label)
            total_loss += loss.item()

            all_predictions.append(pred.detach().cpu().numpy())
            all_labels.append(label.detach().cpu().numpy())

    # Flatten the lists
    all_predictions = np.concatenate(all_predictions).flatten()
    all_labels = np.concatenate(all_labels).flatten()

    # Calculate AUROC and AUPRC
    # y_true, y_pred
    auroc = roc_auc_score(all_labels, all_predictions)
    precision, recall, _ = precision_recall_curve(all_labels, all_predictions)
    auprc = auc(recall, precision)

    # Determine the optimal threshold, which is argmin(abs(sensitivity - specificity)) per the paper
    thresholds = np.linspace(0, 1, 101) # 0 to 1 in 0.01 steps
    min_diff = float('inf')
    optimal_sensitivity = None
    optimal_specificity = None
    optimal_threshold = None

    for threshold in thresholds:
        all_predictions_binary = (all_predictions > threshold).astype(int)

        tn, fp, fn, tp = confusion_matrix(all_labels, all_predictions_binary).ravel()
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        diff = abs(sensitivity - specificity)

        if diff < min_diff:
            min_diff = diff
            optimal_threshold = threshold
            optimal_sensitivity = sensitivity
            optimal_specificity = specificity

    avg_loss = total_loss / len(dataloader)
    
    # accuracy
    predictions_binary = (all_predictions > optimal_threshold).astype(int)
    accuracy = np.mean(predictions_binary == all_labels)

    if print_detailed:
        print(f"Predictions: {all_predictions}")
        print(f"Labels: {all_labels}")
    print(f"Loss: {avg_loss}")
    print(f"AUROC: {auroc}")
    print(f"AUPRC: {auprc}")
    print(f"Sensitivity: {optimal_sensitivity}")
    print(f"Specificity: {optimal_specificity}")
    print(f"Threshold: {optimal_threshold}")
    print(f"Accuracy:  {accuracy}")

    return all_predictions, all_labels, avg_loss, auroc, auprc, \
        optimal_sensitivity, optimal_specificity, optimal_threshold, accuracy

Define a function to calculate and print the AUROC and AUPRC values for each epoch of a training run:

In [97]:
def print_all_evals(model, models, device, val_loader, test_loader, loss_func, print_detailed: bool = False):
    print()
    print(f'Generate AUROC/AUPRC for Each Intermediate Model')
    print()
    val_aurocs = []
    val_auprcs = []
    val_accs   = []

    test_aurocs = []
    test_auprcs = []
    test_accs   = []

    for mod in models:
        model.load_state_dict(torch.load(mod))
        #model.train(False)
        model.eval()
        print(f'Intermediate Model:')
        print(f'  {mod}')
    
        # validation loop
        print("AUROC/AUPRC on Validation Data")
        all_predictions, all_labels, avg_loss, valid_auroc, valid_auprc, \
        optimal_sensitivity, optimal_specificity, optimal_threshold, valid_accuracy = \
            eval_model(model, device, val_loader, loss_func, print_detailed)

        val_aurocs.append(valid_auroc)
        val_auprcs.append(valid_auprc)
        val_accs.append(valid_accuracy)
        print()
    
        # test loop
        print("AUROC/AUPRC on Test Data")
        all_predictions, all_labels, avg_loss, test_auroc, test_auprc, \
        optimal_sensitivity, optimal_specificity, optimal_threshold, test_accuracy = \
            eval_model(model, device, test_loader, loss_func, print_detailed)

        test_aurocs.append(test_auroc)
        test_auprcs.append(test_auprc)
        test_accs.append(test_accuracy)
        print()
    
    return val_aurocs, val_auprcs, val_accs, test_aurocs, test_auprcs, test_accs

Define a function to plot the AUROC, AUPRC and accuracy at each epoch and print the parameters for the best epoch on validation loss, AUROC and accuracy:

In [98]:
def plot_auroc_auprc(val_losses, val_aurocs, val_auprcs, val_accs, 
                                      test_aurocs, test_auprcs, test_accs, all_models, best_epoch, experimentName):
    print()
    print(f'Plot AUROC/AUPRC for Each Intermediate Model')
    
    # Create x-axis values for epochs
    epochs = range(0, len(val_aurocs))

    # Find model with highest AUROC
    np_test_aurocs = np.array(test_aurocs)
    test_auroc_idx = np.argmax(np_test_aurocs)
    test_accs_idx  = np.argmax(test_accs)

    print(f'  Epoch with best Validation Loss:     {best_epoch:3}, {val_losses[best_epoch]:.4}')
    print(f'  Epoch with best model Test AUROC:    {test_auroc_idx:3}, {np_test_aurocs[test_auroc_idx]:.4}')
    print(f'  Epoch with best model Test Accuracy: {test_accs_idx:3}, {test_accs[test_accs_idx]:.4}')
    print()

    plt.figure(figsize=(16, 9))

    # Plots
    plt.plot(epochs, val_aurocs, 'C0', label='AUROC - Validation')
    plt.plot(epochs, test_aurocs, 'C1', label='AUROC - Test')

    plt.plot(epochs, val_auprcs, 'C2', label='AUPRC - Validation')
    plt.plot(epochs, test_auprcs, 'C3', label='AUPRC - Test')
    
    plt.plot(epochs, val_accs, 'C4', label='Accuracy - Validation')
    plt.plot(epochs, test_accs, 'C5', label='Accuracy - Test')

    # Add vertical bars
    plt.axvline(x=best_epoch, color='g', linestyle='--', label='Best Epoch - Validation Loss')
    plt.axvline(x=test_auroc_idx, color='maroon', linestyle='--', label='Best Epoch - Test AUROC')
    plt.axvline(x=test_accs_idx, color='violet', linestyle='--', label='Best Epoch - Test Accuracy')

    # Shade everything to the right of the best_model a light red
    plt.axvspan(test_auroc_idx, max(epochs), facecolor='r', alpha=0.1)

    # Add labels and title
    plt.xlabel('Epochs')
    plt.ylabel('AUROC / AUPRC')
    plt.title('Validation and Test AUROC and AUPRC by Model Iteration Across Training')

    # Add legend
    plt.legend(loc='right')

    # Save plot to disk
    plt.savefig(os.path.join(VITAL_RUNS, f'{experimentName}_all_stats.png'))
    
    # Show the plot
    plt.show()

    return np_test_aurocs, test_auroc_idx

Define a function to make predictions on a given case:

In [99]:
# applies the model to a given real case to generate predictions
def predictionsForModel(case_id_to_check, my_model, my_model_state, device):
    (abp, ecg, eeg, event) = get_track_data(case_id_to_check)
    
    opstart = cases.loc[case_id_to_check]['opstart'].item()
    opend = cases.loc[case_id_to_check]['opend'].item()

    abp = abp[opstart*500:opend*500]
    ecg = ecg[opstart*500:opend*500]
    eeg = eeg[opstart*128:opend*128]
    
    # number of one minute segments in each track
    splits_abp = abp.shape[0] // (60 * 500)
    splits_ecg = ecg.shape[0] // (60 * 500)
    splits_eeg = eeg.shape[0] // (60 * 128)
    
    # predict as long as each track has data in the prediction window
    splits = np.min([splits_abp, splits_ecg, splits_eeg])
    
    preds = []
    
    my_model.load_state_dict(torch.load(my_model_state, map_location=device))
    my_model.eval()
    my_model = my_model.to(device)
    
    for i in range(splits):
        t_abp = abp[i*60*500:(i + 1)*60*500]
        t_ecg = ecg[i*60*500:(i + 1)*60*500]
        t_eeg = eeg[i*60*128:(i + 1)*60*128]
    
        # Pad any trailing short segment to a full minute; note that np.resize
        # repeats the signal to fill the window rather than zero-padding
        if len(t_abp) < 30000:
            t_abp = np.resize(t_abp, (30000))
            
        if len(t_ecg) < 30000:
            t_ecg = np.resize(t_ecg, (30000))
            
        if len(t_eeg) < 7680:
            t_eeg = np.resize(t_eeg, (7680))
            
        t_abp = torch.from_numpy(t_abp)
        t_ecg = torch.from_numpy(t_ecg)
        t_eeg = torch.from_numpy(t_eeg)
        
        t_abp = torch.nan_to_num(t_abp.reshape(1, 1, -1)).type(torch.FloatTensor).to(device)
        t_ecg = torch.nan_to_num(t_ecg.reshape(1, 1, -1)).type(torch.FloatTensor).to(device)
        t_eeg = torch.nan_to_num(t_eeg.reshape(1, 1, -1)).type(torch.FloatTensor).to(device)

        pred = my_model(t_abp, t_eeg, t_ecg)
        preds.append(pred.detach().cpu().numpy())
    
    return np.concatenate(preds).flatten()
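The slicing above assumes 500 Hz ABP/ECG and 128 Hz EEG, so one minute is 30 000 and 7 680 samples respectively. A small standalone sketch of the segment-and-pad step (mirroring the `np.resize` padding used above, which repeats the array rather than zero-padding):

```python
import numpy as np

def one_minute_segments(track, hz):
    """Split a 1-D waveform into complete one-minute windows, as in predictionsForModel."""
    win = 60 * hz
    n = len(track) // win  # keep only complete windows
    segs = [track[i * win:(i + 1) * win] for i in range(n)]
    # guard against short segments by repetition, mirroring np.resize above
    segs = [np.resize(s, win) if len(s) < win else s for s in segs]
    return segs

abp = np.arange(75_000, dtype=float)   # 150 s of fake 500 Hz data
segs = one_minute_segments(abp, 500)
print(len(segs), len(segs[0]))         # -> 2 30000
```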

Define a function to plot the mean ABP and predictions for a case:

In [100]:
def printModelPrediction(case_id_to_check, preds, experimentName):  
    (abp, ecg, eeg, event) = get_track_data(case_id_to_check)
    
    opstart = cases.loc[case_id_to_check]['opstart'].item()
    opend = cases.loc[case_id_to_check]['opend'].item()
    minutes = (opend - opstart) / 60
    
    plt.figure(figsize=(24, 8))
    plt.margins(0)
    plt.title(f'ABP - Mean Arterial Pressure - Case: {case_id_to_check} - Operating Time: {minutes:.1f} minutes')
    plt.axhline(y = 65, color = 'maroon', linestyle = '--')
    
    opstart = opstart * 500
    opend = opend * 500
    
    minute_step = 5
    
    abp_mov_avg = moving_average(abp[opstart:(opend + 60*500)])
    myx = np.arange(opstart, opstart + len(abp_mov_avg), 1)
    plt.plot(myx, abp_mov_avg, 'purple')
    x_ticks = np.arange(opstart, opend, step=minute_step*30000)
    x_labels = [str(i*minute_step) for i in range(len(x_ticks))]
    plt.xticks(x_ticks, labels=x_labels)
    plt.savefig(os.path.join(VITAL_RUNS, f'{experimentName}_{case_id_to_check:04d}_surgery_map.png'))
    plt.show()
    
    plt.figure(figsize=(24, 8))
    plt.margins(0)
    plt.title(f'Model Predictions for One Minute Intervals Using {PREDICTION_WINDOW} Minute Prediction Window')
    plt.plot(preds)
    x_ticks = np.arange(0, len(preds), step=minute_step)
    x_labels = [str(i*minute_step) for i in range(len(x_ticks))]
    plt.xticks(x_ticks, labels=x_labels)
    plt.savefig(os.path.join(VITAL_RUNS, f'{experimentName}_{case_id_to_check:04d}_surgery_predictions.png'))
    plt.show()
    
    return preds
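The `moving_average` helper used above is defined earlier in the notebook; for illustration only, a simple moving average can be written with `np.convolve` (this standalone version is an assumption about the helper's behavior, not its actual implementation):

```python
import numpy as np

def simple_moving_average(x, window):
    """Illustrative moving average: mean over a sliding window of length `window`."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode='valid')

x = np.array([1., 2., 3., 4., 5.])
print(simple_moving_average(x, 3))  # -> [2. 3. 4.]
```

In the plotting code above, a one-minute smoothing window at 500 Hz would correspond to `window = 60 * 500` samples.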

Define a function to run an experiment, which includes training a model and evaluating it.

In [101]:
def run_experiment(
    experimentNamePrefix: str = None,
    useAbp: bool = True, 
    useEeg: bool = False, 
    useEcg: bool = False, 
    nResiduals: int = 12, 
    skip_connection: bool = False, 
    batch_size: int = 64, 
    learning_rate: float = 1e-4, 
    weight_decay: float = 0.0, 
    pos_weight: float = None,
    max_epochs: int = 100, 
    patience: int = 25, 
    device: str = "cpu"
):
    reset_random_state()

    time_start = timer()

    experimentName = ""

    experimentOptions = [experimentNamePrefix, 'ABP', 'EEG', 'ECG', 'SKIPCONNECTION']
    experimentValues = [experimentNamePrefix is not None, useAbp, useEeg, useEcg, skip_connection]
    experimentFlags = [name for name, value in zip(experimentOptions, experimentValues) if value]
    if experimentFlags:
        experimentName = "_".join(experimentFlags)

    experimentName = f"{experimentName}_{nResiduals}_RESIDUAL_BLOCKS_{batch_size}_BATCH_SIZE_{learning_rate:.0e}_LEARNING_RATE"

    if weight_decay is not None and weight_decay != 0.0:
        experimentName = f"{experimentName}_{weight_decay:.0e}_WEIGHT_DECAY"

    predictionWindow = 'ALL' if PREDICTION_WINDOW == 'ALL' else f'{PREDICTION_WINDOW:03}'
    experimentName = f"{experimentName}_{predictionWindow}_MINS"

    maxCases = '_ALL' if MAX_CASES is None else f'{MAX_CASES:04}'
    experimentName = f"{experimentName}_{maxCases}_MAX_CASES"
    
    # Append a unique 8-character uuid4 suffix to the experiment name
    experimentName = f"{experimentName}_{uuid.uuid4().hex[:8]}"

    # Fork stdout to file and console
    with ForkedStdout(os.path.join(VITAL_RUNS, f'{experimentName}.log')):
        print(f"Experiment Setup")
        print(f'  name:              {experimentName}')
        print(f'  prediction_window: {predictionWindow}')
        print(f'  max_cases:         {maxCases}')
        print(f'  use_abp:           {useAbp}')
        print(f'  use_eeg:           {useEeg}')
        print(f'  use_ecg:           {useEcg}')
        print(f'  n_residuals:       {nResiduals}')
        print(f'  skip_connection:   {skip_connection}')
        print(f'  batch_size:        {batch_size}')
        print(f'  learning_rate:     {learning_rate}')
        print(f'  weight_decay:      {weight_decay}')
        if pos_weight is not None:
            print(f'  pos_weight:        {pos_weight}')
        print(f'  max_epochs:        {max_epochs}')
        print(f'  patience:          {patience}')
        print(f'  device:            {device}')
        print()

        train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
        val_loader = torch.utils.data.DataLoader(val_dataset, batch_size=batch_size, shuffle=True)
        test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=False)

        # Disable final sigmoid activation for BCEWithLogitsLoss
        model = HypotensionCNN(useAbp, useEeg, useEcg, device, nResiduals, skip_connection, useSigmoid=(pos_weight is None))
        model = model.to(device)
    
        if pos_weight is not None:
            # Apply weights to positive class
            loss_func = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([pos_weight]).to(device))
        else:
            loss_func = nn.BCELoss()
        optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate, weight_decay=weight_decay)

    
        print(f'Model Architecture')
        print(model)
        print()

        print(f'Training Loop')
        # Training loop
        best_epoch = 0
        train_losses = []
        val_losses = []
        best_loss = float('inf')
        no_improve_epochs = 0
        model_path = os.path.join(VITAL_MODELS, f"{experimentName}.model")

        all_models = []

        for i in range(max_epochs):
            # Train the model and get the training loss
            train_loss = train_model_one_iter(model, device, loss_func, optimizer, train_loader)
            train_losses.append(train_loss)
            # Calculate validate loss
            val_loss = evaluate_model(model, loss_func, val_loader)
            val_losses.append(val_loss)
            print(f"[{datetime.now()}] Completed epoch {i} with training loss {train_loss:.8f}, validation loss {val_loss:.8f}")

            # Save all intermediary models.
            tmp_model_path = os.path.join(VITAL_MODELS, f"{experimentName}_{i:04d}.model")
            torch.save(model.state_dict(), tmp_model_path)
            all_models.append(tmp_model_path)
  
            # Check if validation loss has improved
            if val_loss < best_loss:
                best_epoch = i
                best_loss = val_loss
                no_improve_epochs = 0
                torch.save(model.state_dict(), model_path)
                print(f"Validation loss improved to {val_loss:.8f}. Model saved.")
            else:
                no_improve_epochs += 1
                print(f"No improvement in validation loss. {no_improve_epochs} epochs without improvement.")

            # exit early if no improvement in loss over last 'patience' epochs
            if no_improve_epochs >= patience:
                print("Early stopping due to no improvement in validation loss.")
                break

        # Loading the best model here is unnecessary: the best models (by
        # validation loss and by test AUROC) are reloaded explicitly below.

        # Plot the training and validation losses across all training epochs.
        plot_losses(train_losses, val_losses, best_epoch, experimentName)

        # Generate AUROC/AUPRC for each intermediate model generated across training epochs.
        val_aurocs, val_auprcs, val_accs, test_aurocs, test_auprcs, test_accs = \
            print_all_evals(model, all_models, device, val_loader, test_loader, loss_func, print_detailed=False)

        # Find model with highest AUROC. Plot AUROC/AUPRC across all epochs.
        np_test_aurocs, test_auroc_idx = plot_auroc_auprc(val_losses, val_aurocs, val_auprcs, val_accs, \
                                        test_aurocs, test_auprcs, test_accs, all_models, best_epoch, experimentName)

        ## AUROC / AUPRC - Model with Best Validation Loss
        best_model_val_loss = all_models[best_epoch]
    
        print(f'AUROC/AUPRC Plots - Best Model Based on Validation Loss')
        print(f'  Epoch with best Validation Loss:  {best_epoch:3}, {val_losses[best_epoch]:.4}')
        print(f'  Best Model Based on Validation Loss:')
        print(f'    {best_model_val_loss}')
        print()
        print(f'Generate Stats Based on Test Data')
        model.load_state_dict(torch.load(best_model_val_loss))
        model.eval()
    
        best_model_val_test_predictions, best_model_val_test_labels, test_loss, \
            best_model_val_test_auroc, best_model_val_test_auprc, test_sensitivity, test_specificity, \
            best_model_val_test_threshold, best_model_val_accuracy = \
                eval_model(model, device, test_loader, loss_func, print_detailed=False)

        # y_test, y_pred
        display = RocCurveDisplay.from_predictions(
            best_model_val_test_labels,
            best_model_val_test_predictions,
            plot_chance_level=True
        )
        # Save plot to disk and show
        plt.savefig(os.path.join(VITAL_RUNS, f'{experimentName}_val_auroc.png'))
        plt.show()

        print(f'best_model_val_test_auroc: {best_model_val_test_auroc}')

        # Save best model in its entirety
        torch.save(model, os.path.join(VITAL_MODELS, f'{experimentName}_full.model'))

        best_model_val_test_predictions_binary = \
            (best_model_val_test_predictions > best_model_val_test_threshold).astype(int)

        # y_test, y_pred
        display = PrecisionRecallDisplay.from_predictions(
            best_model_val_test_labels, 
            best_model_val_test_predictions_binary,
            plot_chance_level=True
        )
        # Save plot to disk and show
        plt.savefig(os.path.join(VITAL_RUNS, f'{experimentName}_val_auprc.png'))
        plt.show()

        print(f'best_model_val_test_auprc: {best_model_val_test_auprc}')
        print()

        ## AUROC / AUPRC - Model with Best AUROC
        # Find model with highest AUROC
        best_model_auroc = all_models[test_auroc_idx]

        print(f'AUROC/AUPRC Plots - Best Model Based on Model AUROC')
        print(f'  Epoch with best model Test AUROC: {test_auroc_idx:3}, {np_test_aurocs[test_auroc_idx]:.4}')
        print(f'  Best Model Based on Model AUROC:')
        print(f'    {best_model_auroc}')
        print()
        print(f'Generate Stats Based on Test Data')
        model.load_state_dict(torch.load(best_model_auroc))
        model.eval()
    
        best_model_auroc_test_predictions, best_model_auroc_test_labels, test_loss, \
            best_model_auroc_test_auroc, best_model_auroc_test_auprc, test_sensitivity, test_specificity, \
            best_model_auroc_test_threshold, best_model_auroc_accuracy = \
                eval_model(model, device, test_loader, loss_func, print_detailed=False)

        # y_test, y_pred
        display = RocCurveDisplay.from_predictions(
            best_model_auroc_test_labels,
            best_model_auroc_test_predictions,
            plot_chance_level=True
        )
        # Save plot to disk and show
        plt.savefig(os.path.join(VITAL_RUNS, f'{experimentName}_auroc_auroc.png'))
        plt.show()

        print(f'best_model_auroc_test_auroc: {best_model_auroc_test_auroc}')

        best_model_auroc_test_predictions_binary = \
            (best_model_auroc_test_predictions > best_model_auroc_test_threshold).astype(int)

        # y_test, y_pred
        display = PrecisionRecallDisplay.from_predictions(
            best_model_auroc_test_labels, 
            best_model_auroc_test_predictions_binary,
            plot_chance_level=True
        )
        # Save plot to disk and show
        plt.savefig(os.path.join(VITAL_RUNS, f'{experimentName}_auroc_auprc.png'))
        plt.show()

        print(f"best_model_auroc_test_auprc: {best_model_auroc_test_auprc}")
        print()
        
        time_delta = np.round(timer() - time_start, 3)
        print(f'Total Processing Time: {time_delta:.4f} sec')
        
    return (model, best_model_val_loss, best_model_auroc, experimentName)
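The experiment name built inside `run_experiment` joins the enabled track flags, then appends fixed hyperparameter suffixes. The core of that naming scheme, extracted as a standalone sketch (truncated here to the learning-rate suffix; the full name also encodes weight decay, prediction window, max cases, and a uuid suffix):

```python
def build_experiment_name(use_abp, use_eeg, use_ecg, skip_connection,
                          n_residuals, batch_size, learning_rate, prefix=None):
    """Reproduce the flag-joining scheme used in run_experiment (illustrative)."""
    options = [prefix, 'ABP', 'EEG', 'ECG', 'SKIPCONNECTION']
    values = [prefix is not None, use_abp, use_eeg, use_ecg, skip_connection]
    name = "_".join(o for o, v in zip(options, values) if v)
    return (f"{name}_{n_residuals}_RESIDUAL_BLOCKS_{batch_size}_BATCH_SIZE"
            f"_{learning_rate:.0e}_LEARNING_RATE")

print(build_experiment_name(True, False, False, False, 12, 128, 1e-4))
# -> ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE
```

This matches the prefix of the experiment name printed in the log output below.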

Experiments¶

In [102]:
# When false, run only the first experiment below and then stop
SWEEP_ALL = True

Data tracks¶

Run experiments across the biosignal data track combinations:

  • ABP
  • ECG
  • EEG
  • ABP+ECG
  • ABP+EEG
  • ECG+EEG
  • ABP+ECG+EEG

The first experiment acts as a baseline.
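The seven combinations are exactly the non-empty subsets of {ABP, ECG, EEG}; they could also be enumerated programmatically rather than hand-written (a sketch, equivalent to the hand-written `data_tracks` flag list):

```python
from itertools import product

# All non-empty (useAbp, useEeg, useEcg) flag combinations
combos = [c for c in product([False, True], repeat=3) if any(c)]
print(len(combos))  # -> 7
```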

In [103]:
ENABLE_EXPERIMENT = True
DISPLAY_MODEL_PREDICTION = True
DISPLAY_MODEL_PREDICTION_FIRST_ONLY = False

MAX_EPOCHS = 200
PATIENCE = 20

data_tracks = [
    # useAbp, useEeg, useEcg, experiment enabled
    [True, False, False, True], # ABP only
    [False, False, True, False], # ECG only
    [False, True, False, False], # EEG only
    [True, False, True, True], # ABP + ECG
    [True, True, False, True], # ABP + EEG
    [False, True, True, False], # ECG + EEG
    [True, True, True, True] # ABP + ECG + EEG
]

if ENABLE_EXPERIMENT:
    for (useAbp, useEeg, useEcg, enable) in data_tracks:
        if enable:
            (model, best_model_val_loss, best_model_auroc, experimentName) = run_experiment(
                experimentNamePrefix=None, 
                useAbp=useAbp, 
                useEeg=useEeg, 
                useEcg=useEcg,
                nResiduals=12, 
                skip_connection=False,
                batch_size=128,
                learning_rate=1e-4,
                weight_decay=1e-1,
                pos_weight=None,
                max_epochs=MAX_EPOCHS,
                patience=PATIENCE,
                device=device
            )

            if DISPLAY_MODEL_PREDICTION:
                for case_id_to_check in my_cases_of_interest_idx:
                    preds = predictionsForModel(case_id_to_check, model, best_model_val_loss, device)
                    printModelPrediction(case_id_to_check, preds, experimentName)

                    if DISPLAY_MODEL_PREDICTION_FIRST_ONLY:
                        break
Experiment Setup
  name:              ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604
  prediction_window: 003
  max_cases:         _ALL
  use_abp:           True
  use_eeg:           False
  use_ecg:           False
  n_residuals:       12
  skip_connection:   False
  batch_size:        128
  learning_rate:     0.0001
  weight_decay:      0.1
  max_epochs:        200
  patience:          20
  device:            mps

Model Architecture
HypotensionCNN(
  (abpResiduals): Sequential(
    (0): ResidualBlock(
      (bn1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (1): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (2): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (3): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (4): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (5): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (6): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (7): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (8): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=1, dilation=1, ceil_mode=False)
    )
    (9): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (10): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (11): ResidualBlock(
      (bn1): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
  )
  (abpFc): Linear(in_features=2814, out_features=32, bias=True)
  (fullLinear1): Linear(in_features=32, out_features=16, bias=True)
  (fullLinear2): Linear(in_features=16, out_features=1, bias=True)
  (sigmoid): Sigmoid()
)

Training Loop
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:45<00:00,  2.04it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.55it/s]
[2024-05-05 13:07:46.245516] Completed epoch 0 with training loss 0.50478780, validation loss 0.59283733
Validation loss improved to 0.59283733. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:44<00:00,  2.09it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.63it/s]
[2024-05-05 13:08:36.458741] Completed epoch 1 with training loss 0.44203544, validation loss 0.60220039
No improvement in validation loss. 1 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:44<00:00,  2.06it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.63it/s]
[2024-05-05 13:09:27.160610] Completed epoch 2 with training loss 0.43507746, validation loss 0.59311938
No improvement in validation loss. 2 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:44<00:00,  2.09it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.63it/s]
[2024-05-05 13:10:17.347167] Completed epoch 3 with training loss 0.43294305, validation loss 0.58648801
Validation loss improved to 0.58648801. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:43<00:00,  2.10it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.63it/s]
[2024-05-05 13:11:07.384581] Completed epoch 4 with training loss 0.43270704, validation loss 0.60147583
No improvement in validation loss. 1 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:44<00:00,  2.08it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.63it/s]
[2024-05-05 13:11:57.677526] Completed epoch 5 with training loss 0.43106821, validation loss 0.59025836
No improvement in validation loss. 2 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:43<00:00,  2.10it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.64it/s]
[2024-05-05 13:12:47.575146] Completed epoch 6 with training loss 0.43024591, validation loss 0.56092763
Validation loss improved to 0.56092763. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:43<00:00,  2.10it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.61it/s]
[2024-05-05 13:13:37.600462] Completed epoch 7 with training loss 0.43008435, validation loss 0.59122258
No improvement in validation loss. 1 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:43<00:00,  2.10it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.64it/s]
[2024-05-05 13:14:27.403100] Completed epoch 8 with training loss 0.43055680, validation loss 0.57370448
No improvement in validation loss. 2 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:43<00:00,  2.11it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.63it/s]
[2024-05-05 13:15:17.217663] Completed epoch 9 with training loss 0.43016753, validation loss 0.56177950
No improvement in validation loss. 3 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:44<00:00,  2.08it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.63it/s]
[2024-05-05 13:16:07.586795] Completed epoch 10 with training loss 0.43023956, validation loss 0.60044050
No improvement in validation loss. 4 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:44<00:00,  2.09it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.63it/s]
[2024-05-05 13:16:57.769512] Completed epoch 11 with training loss 0.42765778, validation loss 0.57293367
No improvement in validation loss. 5 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:44<00:00,  2.08it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.62it/s]
[2024-05-05 13:17:48.077428] Completed epoch 12 with training loss 0.42873660, validation loss 0.55371809
Validation loss improved to 0.55371809. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:43<00:00,  2.09it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.62it/s]
[2024-05-05 13:18:38.239382] Completed epoch 13 with training loss 0.42835078, validation loss 0.56576401
No improvement in validation loss. 1 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:43<00:00,  2.09it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.63it/s]
[2024-05-05 13:19:28.356909] Completed epoch 14 with training loss 0.42863306, validation loss 0.56444657
No improvement in validation loss. 2 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:44<00:00,  2.08it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.63it/s]
[2024-05-05 13:20:18.634219] Completed epoch 15 with training loss 0.42691931, validation loss 0.58557701
No improvement in validation loss. 3 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:44<00:00,  2.09it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.62it/s]
[2024-05-05 13:21:08.883981] Completed epoch 16 with training loss 0.42684737, validation loss 0.54507637
Validation loss improved to 0.54507637. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:44<00:00,  2.08it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.62it/s]
[2024-05-05 13:21:59.217060] Completed epoch 17 with training loss 0.42635703, validation loss 0.53839862
Validation loss improved to 0.53839862. Model saved.
[2024-05-05 13:22:49.455579] Completed epoch 18 with training loss 0.42482322, validation loss 0.63387161
No improvement in validation loss. 1 epochs without improvement.
[2024-05-05 13:23:41.456647] Completed epoch 19 with training loss 0.42596799, validation loss 0.61437643
No improvement in validation loss. 2 epochs without improvement.
[2024-05-05 13:24:32.648356] Completed epoch 20 with training loss 0.42495093, validation loss 0.55088449
No improvement in validation loss. 3 epochs without improvement.
[2024-05-05 13:25:24.404780] Completed epoch 21 with training loss 0.42591697, validation loss 0.52314341
Validation loss improved to 0.52314341. Model saved.
[2024-05-05 13:26:15.711453] Completed epoch 22 with training loss 0.42269731, validation loss 0.56138211
No improvement in validation loss. 1 epochs without improvement.
[2024-05-05 13:27:06.979104] Completed epoch 23 with training loss 0.42490670, validation loss 0.56841862
No improvement in validation loss. 2 epochs without improvement.
[2024-05-05 13:27:58.374077] Completed epoch 24 with training loss 0.42309231, validation loss 0.51616138
Validation loss improved to 0.51616138. Model saved.
[2024-05-05 13:28:49.935051] Completed epoch 25 with training loss 0.42370871, validation loss 0.54135537
No improvement in validation loss. 1 epochs without improvement.
[2024-05-05 13:29:43.295548] Completed epoch 26 with training loss 0.42379835, validation loss 0.49853456
Validation loss improved to 0.49853456. Model saved.
[2024-05-05 13:30:34.895914] Completed epoch 27 with training loss 0.42449281, validation loss 0.55806577
No improvement in validation loss. 1 epochs without improvement.
[2024-05-05 13:31:24.929706] Completed epoch 28 with training loss 0.42629436, validation loss 0.55938870
No improvement in validation loss. 2 epochs without improvement.
[2024-05-05 13:32:15.555212] Completed epoch 29 with training loss 0.42332163, validation loss 0.47975424
Validation loss improved to 0.47975424. Model saved.
[2024-05-05 13:33:05.708076] Completed epoch 30 with training loss 0.42302474, validation loss 0.55218160
No improvement in validation loss. 1 epochs without improvement.
[2024-05-05 13:33:56.005594] Completed epoch 31 with training loss 0.42343977, validation loss 0.54578781
No improvement in validation loss. 2 epochs without improvement.
[2024-05-05 13:34:46.139899] Completed epoch 32 with training loss 0.42324361, validation loss 0.53106177
No improvement in validation loss. 3 epochs without improvement.
[2024-05-05 13:35:36.206259] Completed epoch 33 with training loss 0.42391533, validation loss 0.52156615
No improvement in validation loss. 4 epochs without improvement.
[2024-05-05 13:36:26.157992] Completed epoch 34 with training loss 0.42254275, validation loss 0.52490753
No improvement in validation loss. 5 epochs without improvement.
[2024-05-05 13:37:16.292019] Completed epoch 35 with training loss 0.42170438, validation loss 0.53385854
No improvement in validation loss. 6 epochs without improvement.
[2024-05-05 13:38:09.260617] Completed epoch 36 with training loss 0.42147306, validation loss 0.52325726
No improvement in validation loss. 7 epochs without improvement.
[2024-05-05 13:39:00.227052] Completed epoch 37 with training loss 0.42439273, validation loss 0.56559336
No improvement in validation loss. 8 epochs without improvement.
[2024-05-05 13:39:51.072855] Completed epoch 38 with training loss 0.42334756, validation loss 0.51361662
No improvement in validation loss. 9 epochs without improvement.
[2024-05-05 13:40:41.453817] Completed epoch 39 with training loss 0.42096871, validation loss 0.48779669
No improvement in validation loss. 10 epochs without improvement.
[2024-05-05 13:41:31.653309] Completed epoch 40 with training loss 0.42214933, validation loss 0.49129212
No improvement in validation loss. 11 epochs without improvement.
[2024-05-05 13:42:21.444946] Completed epoch 41 with training loss 0.42325592, validation loss 0.49803615
No improvement in validation loss. 12 epochs without improvement.
[2024-05-05 13:43:11.168137] Completed epoch 42 with training loss 0.42110091, validation loss 0.53317016
No improvement in validation loss. 13 epochs without improvement.
[2024-05-05 13:44:00.986854] Completed epoch 43 with training loss 0.42316052, validation loss 0.49420094
No improvement in validation loss. 14 epochs without improvement.
[2024-05-05 13:44:50.820223] Completed epoch 44 with training loss 0.42125160, validation loss 0.47970644
Validation loss improved to 0.47970644. Model saved.
[2024-05-05 13:45:41.059561] Completed epoch 45 with training loss 0.42109659, validation loss 0.51293129
No improvement in validation loss. 1 epochs without improvement.
[2024-05-05 13:46:31.360539] Completed epoch 46 with training loss 0.42139509, validation loss 0.48969537
No improvement in validation loss. 2 epochs without improvement.
[2024-05-05 13:47:21.736429] Completed epoch 47 with training loss 0.42109689, validation loss 0.46799773
Validation loss improved to 0.46799773. Model saved.
[2024-05-05 13:48:11.923076] Completed epoch 48 with training loss 0.42055705, validation loss 0.49802470
No improvement in validation loss. 1 epochs without improvement.
[2024-05-05 13:49:02.025597] Completed epoch 49 with training loss 0.42001325, validation loss 0.46762788
Validation loss improved to 0.46762788. Model saved.
[2024-05-05 13:49:52.218434] Completed epoch 50 with training loss 0.42103925, validation loss 0.47107077
No improvement in validation loss. 1 epochs without improvement.
[2024-05-05 13:50:42.378661] Completed epoch 51 with training loss 0.42315450, validation loss 0.44970593
Validation loss improved to 0.44970593. Model saved.
[2024-05-05 13:51:32.634361] Completed epoch 52 with training loss 0.42088681, validation loss 0.45432588
No improvement in validation loss. 1 epochs without improvement.
[2024-05-05 13:52:22.711354] Completed epoch 53 with training loss 0.42215550, validation loss 0.46662116
No improvement in validation loss. 2 epochs without improvement.
[2024-05-05 13:53:14.038989] Completed epoch 54 with training loss 0.42064986, validation loss 0.54910094
No improvement in validation loss. 3 epochs without improvement.
[2024-05-05 13:54:05.857588] Completed epoch 55 with training loss 0.42115963, validation loss 0.51014912
No improvement in validation loss. 4 epochs without improvement.
[2024-05-05 13:54:56.954988] Completed epoch 56 with training loss 0.42040938, validation loss 0.46705377
No improvement in validation loss. 5 epochs without improvement.
[2024-05-05 13:55:49.474488] Completed epoch 57 with training loss 0.42185584, validation loss 0.48147869
No improvement in validation loss. 6 epochs without improvement.
[2024-05-05 13:56:39.595975] Completed epoch 58 with training loss 0.41950279, validation loss 0.50390363
No improvement in validation loss. 7 epochs without improvement.
[2024-05-05 13:57:29.713442] Completed epoch 59 with training loss 0.42065302, validation loss 0.47124892
No improvement in validation loss. 8 epochs without improvement.
[2024-05-05 13:58:20.245416] Completed epoch 60 with training loss 0.42140657, validation loss 0.50581634
No improvement in validation loss. 9 epochs without improvement.
[2024-05-05 13:59:10.365929] Completed epoch 61 with training loss 0.42257729, validation loss 0.50418067
No improvement in validation loss. 10 epochs without improvement.
[2024-05-05 14:00:00.568960] Completed epoch 62 with training loss 0.42081514, validation loss 0.46422094
No improvement in validation loss. 11 epochs without improvement.
[2024-05-05 14:00:50.644411] Completed epoch 63 with training loss 0.42085835, validation loss 0.50240731
No improvement in validation loss. 12 epochs without improvement.
[2024-05-05 14:01:40.866689] Completed epoch 64 with training loss 0.41951951, validation loss 0.47880191
No improvement in validation loss. 13 epochs without improvement.
[2024-05-05 14:02:31.076152] Completed epoch 65 with training loss 0.42047775, validation loss 0.49168843
No improvement in validation loss. 14 epochs without improvement.
[2024-05-05 14:03:21.217361] Completed epoch 66 with training loss 0.41961512, validation loss 0.45698017
No improvement in validation loss. 15 epochs without improvement.
[2024-05-05 14:04:11.422912] Completed epoch 67 with training loss 0.42221799, validation loss 0.57897615
No improvement in validation loss. 16 epochs without improvement.
[2024-05-05 14:05:01.445899] Completed epoch 68 with training loss 0.41953880, validation loss 0.46659949
No improvement in validation loss. 17 epochs without improvement.
[2024-05-05 14:05:51.705962] Completed epoch 69 with training loss 0.42102665, validation loss 0.46295965
No improvement in validation loss. 18 epochs without improvement.
[2024-05-05 14:06:41.795129] Completed epoch 70 with training loss 0.42059347, validation loss 0.46602616
No improvement in validation loss. 19 epochs without improvement.
[2024-05-05 14:07:31.961477] Completed epoch 71 with training loss 0.42102921, validation loss 0.45085296
No improvement in validation loss. 20 epochs without improvement.
Early stopping due to no improvement in validation loss.
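The training loop above checkpoints the model whenever validation loss improves and halts after 20 consecutive epochs without improvement. A minimal sketch of that logic, written as a standalone function: the name `run_with_early_stopping` and the loss-sequence interface are illustrative, not the project's actual API, and the real loop would save model weights (e.g. with `torch.save`) at the point marked in the comment.

```python
def run_with_early_stopping(val_losses, patience=20):
    """Walk a sequence of per-epoch validation losses; return
    (best_epoch, best_loss, stopped_epoch). Epochs are 1-indexed,
    matching the log above."""
    best_loss = float("inf")
    best_epoch = None
    epochs_without_improvement = 0
    for epoch, val_loss in enumerate(val_losses, start=1):
        if val_loss < best_loss:
            # Validation loss improved: record it and reset the counter.
            # The notebook would also save a model checkpoint here.
            best_loss = val_loss
            best_epoch = epoch
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                # Early stopping: patience exhausted.
                return best_epoch, best_loss, epoch
    return best_epoch, best_loss, len(val_losses)
```

With `patience=20` this reproduces the shape of the run above: the best checkpoint lands at epoch 51 and training stops 20 non-improving epochs later, at epoch 71.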

Plot Validation and Loss Values from Training
  Epoch with best Validation Loss:   51, 0.4497
Generate AUROC/AUPRC for Each Intermediate Model

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0000.model
AUROC/AUPRC on Validation Data
Loss: 0.5912798028439283
AUROC: 0.8407952384692087
AUPRC: 0.686450393193818
Sensitivity: 0.7884940778341794
Specificity: 0.7489451476793249
Threshold: 0.14
Accuracy:  0.7605563835072032

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.57it/s]
Loss: 0.5530640099276888
AUROC: 0.8260300951486238
AUPRC: 0.6628346096068296
Sensitivity: 0.7432950191570882
Specificity: 0.7644098810612991
Threshold: 0.14
Accuracy:  0.7588413607275177
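Each intermediate model is scored with AUROC/AUPRC plus a sensitivity/specificity pair at a reported threshold. A hedged sketch of how such numbers can be computed, using pure-Python stand-ins rather than the notebook's actual evaluation code (which likely uses `sklearn.metrics`); the threshold-selection rule shown here, maximizing Youden's J statistic, is an assumption and may differ from the notebook's criterion.

```python
def auroc(y_true, y_prob):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive scores above a randomly chosen negative
    (ties count as half)."""
    pos = [p for y, p in zip(y_true, y_prob) if y == 1]
    neg = [p for y, p in zip(y_true, y_prob) if y == 0]
    wins = sum(1.0 if pp > pn else 0.5 if pp == pn else 0.0
               for pp in pos for pn in neg)
    return wins / (len(pos) * len(neg))

def best_threshold(y_true, y_prob):
    """Sweep thresholds 0.01..0.99 and return the tuple
    (threshold, J, sensitivity, specificity) maximizing
    Youden's J = sensitivity + specificity - 1."""
    n_pos = sum(y_true)
    n_neg = len(y_true) - n_pos
    best = (0.0, -1.0, 0.0, 0.0)
    for i in range(1, 100):
        t = i / 100
        tp = sum(1 for y, p in zip(y_true, y_prob) if y == 1 and p >= t)
        tn = sum(1 for y, p in zip(y_true, y_prob) if y == 0 and p < t)
        sens, spec = tp / n_pos, tn / n_neg
        j = sens + spec - 1.0
        if j > best[1]:
            best = (t, j, sens, spec)
    return best
```

Note that the low reported thresholds (0.11–0.15) are consistent with a class-imbalanced task like IOH prediction, where a balanced sensitivity/specificity operating point sits well below 0.5.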

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0001.model
AUROC/AUPRC on Validation Data
Loss: 0.6042426750063896
AUROC: 0.8418637747173376
AUPRC: 0.6974501043658691
Sensitivity: 0.7445008460236887
Specificity: 0.7862165963431786
Threshold: 0.13
Accuracy:  0.7739692001987084

AUROC/AUPRC on Test Data
Loss: 0.5622487556427083
AUROC: 0.8305335298702179
AUPRC: 0.6777043947342318
Sensitivity: 0.7662835249042146
Specificity: 0.7477127172918573
Threshold: 0.12
Accuracy:  0.7526103065005052

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0002.model
AUROC/AUPRC on Validation Data
Loss: 0.5927439518272877
AUROC: 0.8419429035152225
AUPRC: 0.7014516351611757
Sensitivity: 0.7715736040609137
Specificity: 0.7686357243319268
Threshold: 0.13
Accuracy:  0.76949826130154

AUROC/AUPRC on Test Data
Loss: 0.5519683995145432
AUROC: 0.8316594980948073
AUPRC: 0.6808465082843131
Sensitivity: 0.7401021711366539
Specificity: 0.7753888380603843
Threshold: 0.13
Accuracy:  0.7660828561805322

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0003.model
AUROC/AUPRC on Validation Data
Loss: 0.5853556990623474
AUROC: 0.8422445448725729
AUPRC: 0.7030180109352344
Sensitivity: 0.7495769881556683
Specificity: 0.7834036568213784
Threshold: 0.14
Accuracy:  0.7734724292101341

AUROC/AUPRC on Test Data
Loss: 0.5450610397978032
AUROC: 0.8326028926677252
AUPRC: 0.6834039059801187
Sensitivity: 0.7598978288633461
Specificity: 0.752516010978957
Threshold: 0.13
Accuracy:  0.754462782081509

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0004.model
AUROC/AUPRC on Validation Data
Loss: 0.6041245013475418
AUROC: 0.8415068026968047
AUPRC: 0.7046265708668141
Sensitivity: 0.7563451776649747
Specificity: 0.7770745428973277
Threshold: 0.12
Accuracy:  0.7709885742672627

AUROC/AUPRC on Test Data
Loss: 0.5598845900373256
AUROC: 0.8323508679989579
AUPRC: 0.6839447273356857
Sensitivity: 0.7694763729246488
Specificity: 0.7408508691674291
Threshold: 0.11
Accuracy:  0.7484001347254968

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0005.model
AUROC/AUPRC on Validation Data
Loss: 0.5877102967351675
AUROC: 0.8415924759817325
AUPRC: 0.7051010696746705
Sensitivity: 0.7529610829103215
Specificity: 0.7770745428973277
Threshold: 0.13
Accuracy:  0.7699950322901142

AUROC/AUPRC on Test Data
Loss: 0.5481542183997783
AUROC: 0.8326773827176073
AUPRC: 0.6842188001891739
Sensitivity: 0.768837803320562
Specificity: 0.7422232387923148
Threshold: 0.12
Accuracy:  0.7492421690804985

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0006.model
AUROC/AUPRC on Validation Data
Loss: 0.5605003647506237
AUROC: 0.8408987603551633
AUPRC: 0.7032739504366273
Sensitivity: 0.7478849407783418
Specificity: 0.7812939521800282
Threshold: 0.15
Accuracy:  0.771485345255837

AUROC/AUPRC on Test Data
Loss: 0.5247014767311989
AUROC: 0.8321504751588829
AUPRC: 0.6834770032287723
Sensitivity: 0.7618135376756067
Specificity: 0.755946935041171
Threshold: 0.14
Accuracy:  0.757494105759515

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0007.model
AUROC/AUPRC on Validation Data
Loss: 0.58964454382658
AUROC: 0.8402585905316742
AUPRC: 0.7034731074177135
Sensitivity: 0.7817258883248731
Specificity: 0.7482419127988749
Threshold: 0.12
Accuracy:  0.7580725285643318

AUROC/AUPRC on Test Data
Loss: 0.5489743608743587
AUROC: 0.8319167808847432
AUPRC: 0.6841223744418166
Sensitivity: 0.7573435504469987
Specificity: 0.7593778591033852
Threshold: 0.12
Accuracy:  0.7588413607275177

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0008.model
AUROC/AUPRC on Validation Data
Loss: 0.5733453966677189
AUROC: 0.8403514032570127
AUPRC: 0.7038697194018042
Sensitivity: 0.7614213197969543
Specificity: 0.770745428973277
Threshold: 0.14
Accuracy:  0.7680079483358172

AUROC/AUPRC on Test Data
Loss: 0.5336323951153045
AUROC: 0.8320632049533838
AUPRC: 0.6840718109865653
Sensitivity: 0.7701149425287356
Specificity: 0.7387923147301007
Threshold: 0.13
Accuracy:  0.7470528797574941

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0009.model
AUROC/AUPRC on Validation Data
Loss: 0.5598537959158421
AUROC: 0.8401336503244875
AUPRC: 0.702412161552822
Sensitivity: 0.7732656514382402
Specificity: 0.7524613220815752
Threshold: 0.14
Accuracy:  0.7585692995529061

AUROC/AUPRC on Test Data
Loss: 0.5247557058613351
AUROC: 0.8318157081111778
AUPRC: 0.683495508417993
Sensitivity: 0.7515964240102171
Specificity: 0.7639524245196706
Threshold: 0.14
Accuracy:  0.7606938363085214

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0010.model
AUROC/AUPRC on Validation Data
Loss: 0.6007058192044497
AUROC: 0.8395291777030516
AUPRC: 0.7043379729992882
Sensitivity: 0.7783417935702199
Specificity: 0.7510548523206751
Threshold: 0.11
Accuracy:  0.7590660705414803

AUROC/AUPRC on Test Data
Loss: 0.5591414897365773
AUROC: 0.8312303769839183
AUPRC: 0.6838886406066996
Sensitivity: 0.7586206896551724
Specificity: 0.7586916742909423
Threshold: 0.11
Accuracy:  0.7586729538565173

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0011.model
AUROC/AUPRC on Validation Data
Loss: 0.572718283161521
AUROC: 0.8393578311331957
AUPRC: 0.703171960669841
Sensitivity: 0.7597292724196277
Specificity: 0.7742616033755274
Threshold: 0.13
Accuracy:  0.7699950322901142

AUROC/AUPRC on Test Data
Loss: 0.5360427717579171
AUROC: 0.8310927894800185
AUPRC: 0.6837766231905396
Sensitivity: 0.7681992337164751
Specificity: 0.7413083257090577
Threshold: 0.12
Accuracy:  0.7484001347254968

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0012.model
AUROC/AUPRC on Validation Data
Loss: 0.5554882902652025
AUROC: 0.839543456583873
AUPRC: 0.7025808986389168
Sensitivity: 0.7529610829103215
Specificity: 0.779887482419128
Threshold: 0.15
Accuracy:  0.7719821162444114

AUROC/AUPRC on Test Data
Loss: 0.5194086562445823
AUROC: 0.8311826157166409
AUPRC: 0.6836967502307693
Sensitivity: 0.7630906768837803
Specificity: 0.7477127172918573
Threshold: 0.14
Accuracy:  0.7517682721455036

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0013.model
AUROC/AUPRC on Validation Data
Loss: 0.5664937552064657
AUROC: 0.838555239040364
AUPRC: 0.7030461664744512
Sensitivity: 0.7698815566835872
Specificity: 0.7637130801687764
Threshold: 0.13
Accuracy:  0.7655240933929458

AUROC/AUPRC on Test Data
Loss: 0.5305301489982199
AUROC: 0.8303345245898958
AUPRC: 0.6833249910483183
Sensitivity: 0.7490421455938697
Specificity: 0.7634949679780421
Threshold: 0.13
Accuracy:  0.7596833950825194

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0014.model
AUROC/AUPRC on Validation Data
Loss: 0.5636861398816109
AUROC: 0.8384558818279824
AUPRC: 0.7033509540350001
Sensitivity: 0.7495769881556683
Specificity: 0.7777777777777778
Threshold: 0.14
Accuracy:  0.76949826130154

AUROC/AUPRC on Test Data
Loss: 0.5268635591293903
AUROC: 0.8300261211774919
AUPRC: 0.6826092026922864
Sensitivity: 0.7624521072796935
Specificity: 0.7470265324794144
Threshold: 0.13
Accuracy:  0.7510946446615022

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0015.model
AUROC/AUPRC on Validation Data
Loss: 0.5880086869001389
AUROC: 0.8383011939524181
AUPRC: 0.7029807961414909
Sensitivity: 0.7749576988155669
Specificity: 0.7531645569620253
Threshold: 0.11
Accuracy:  0.7595628415300546

AUROC/AUPRC on Test Data
Loss: 0.5498430963526381
AUROC: 0.8294233360091328
AUPRC: 0.681087020521002
Sensitivity: 0.7515964240102171
Specificity: 0.757548032936871
Threshold: 0.11
Accuracy:  0.7559784439205119

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0016.model
AUROC/AUPRC on Validation Data
Loss: 0.5478036012500525
AUROC: 0.8381988619731985
AUPRC: 0.703228801688845
Sensitivity: 0.751269035532995
Specificity: 0.7721518987341772
Threshold: 0.16
Accuracy:  0.7660208643815202

AUROC/AUPRC on Test Data
Loss: 0.5112032538398783
AUROC: 0.8293484077824868
AUPRC: 0.681147276389814
Sensitivity: 0.7573435504469987
Specificity: 0.7520585544373285
Threshold: 0.15
Accuracy:  0.7534523408555069

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0017.model
AUROC/AUPRC on Validation Data
Loss: 0.5364330876618624
AUROC: 0.8376729231962798
AUPRC: 0.7038204640280816
Sensitivity: 0.7648054145516074
Specificity: 0.750351617440225
Threshold: 0.16
Accuracy:  0.754595131644312

AUROC/AUPRC on Test Data
Loss: 0.5062350985851694
AUROC: 0.828872912964073
AUPRC: 0.6804855650334659
Sensitivity: 0.7490421455938697
Specificity: 0.7596065873741995
Threshold: 0.16
Accuracy:  0.7568204782755137

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0018.model
AUROC/AUPRC on Validation Data
Loss: 0.6323976032435894
AUROC: 0.8372005302224412
AUPRC: 0.7029348018151926
Sensitivity: 0.7732656514382402
Specificity: 0.7475386779184248
Threshold: 0.09
Accuracy:  0.7550919026328863

AUROC/AUPRC on Test Data
Loss: 0.5895276944688026
AUROC: 0.8277712635498862
AUPRC: 0.6770375465027176
Sensitivity: 0.7490421455938697
Specificity: 0.7607502287282708
Threshold: 0.09
Accuracy:  0.7576625126305153

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0019.model
AUROC/AUPRC on Validation Data
Loss: 0.6144503857940435
AUROC: 0.8400610660136458
AUPRC: 0.704632839564616
Sensitivity: 0.7783417935702199
Specificity: 0.7426160337552743
Threshold: 0.1
Accuracy:  0.7531048186785891

AUROC/AUPRC on Test Data
Loss: 0.5742311629843204
AUROC: 0.8300767305937353
AUPRC: 0.6790803988308324
Sensitivity: 0.7547892720306514
Specificity: 0.7593778591033852
Threshold: 0.1
Accuracy:  0.7581677332435164

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0020.model
AUROC/AUPRC on Validation Data
Loss: 0.5522953197360039
AUROC: 0.8368054811863846
AUPRC: 0.7035706520104023
Sensitivity: 0.7597292724196277
Specificity: 0.7573839662447257
Threshold: 0.15
Accuracy:  0.7580725285643318

AUROC/AUPRC on Test Data
Loss: 0.5177176477427178
AUROC: 0.8279780829824999
AUPRC: 0.6783906023106456
Sensitivity: 0.7611749680715197
Specificity: 0.7458828911253431
Threshold: 0.14
Accuracy:  0.7499157965644998

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0021.model
AUROC/AUPRC on Validation Data
Loss: 0.5213427022099495
AUROC: 0.837133895445275
AUPRC: 0.7026413486563747
Sensitivity: 0.7614213197969543
Specificity: 0.760196905766526
Threshold: 0.19
Accuracy:  0.7605563835072032

AUROC/AUPRC on Test Data
Loss: 0.4941132214475185
AUROC: 0.8281937389798545
AUPRC: 0.6785246630382101
Sensitivity: 0.7503192848020435
Specificity: 0.7548032936870998
Threshold: 0.18
Accuracy:  0.7536207477265072

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0022.model
AUROC/AUPRC on Validation Data
Loss: 0.5594766046851873
AUROC: 0.8390555948224778
AUPRC: 0.7035551051083372
Sensitivity: 0.7580372250423012
Specificity: 0.7665260196905767
Threshold: 0.14
Accuracy:  0.764033780427223

AUROC/AUPRC on Test Data
Loss: 0.5272048134753045
AUROC: 0.8295956855363109
AUPRC: 0.6789200929963594
Sensitivity: 0.7547892720306514
Specificity: 0.752516010978957
Threshold: 0.13
Accuracy:  0.7531155271135063

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0023.model
AUROC/AUPRC on Validation Data
Loss: 0.5690894909203053
AUROC: 0.8373504584710652
AUPRC: 0.7030449233848174
Sensitivity: 0.7648054145516074
Specificity: 0.7559774964838256
Threshold: 0.14
Accuracy:  0.7585692995529061

AUROC/AUPRC on Test Data
Loss: 0.5340299996289801
AUROC: 0.828046146439843
AUPRC: 0.676802066960694
Sensitivity: 0.7586206896551724
Specificity: 0.7456541628545288
Threshold: 0.13
Accuracy:  0.7490737622094982

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0024.model
AUROC/AUPRC on Validation Data
Loss: 0.5182801354676485
AUROC: 0.836582968626919
AUPRC: 0.7018453351874387
Sensitivity: 0.7698815566835872
Specificity: 0.749648382559775
Threshold: 0.19
Accuracy:  0.7555886736214605

AUROC/AUPRC on Test Data
Loss: 0.4896412041593105
AUROC: 0.8281818351777654
AUPRC: 0.6793321965635746
Sensitivity: 0.7426564495530013
Specificity: 0.7602927721866423
Threshold: 0.19
Accuracy:  0.7556416301785113

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0025.model
AUROC/AUPRC on Validation Data
Loss: 0.5413734465837479
AUROC: 0.8383797277969353
AUPRC: 0.7044457130838753
Sensitivity: 0.7631133671742809
Specificity: 0.760196905766526
Threshold: 0.16
Accuracy:  0.7610531544957775

AUROC/AUPRC on Test Data
Loss: 0.5110856243904601
AUROC: 0.8295111174208565
AUPRC: 0.6797363185788953
Sensitivity: 0.7547892720306514
Specificity: 0.7509149130832571
Threshold: 0.15
Accuracy:  0.7519366790165039

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0026.model
AUROC/AUPRC on Validation Data
Loss: 0.4969240818172693
AUROC: 0.839478011713442
AUPRC: 0.7039917823308588
Sensitivity: 0.7732656514382402
Specificity: 0.7559774964838256
Threshold: 0.22
Accuracy:  0.7610531544957775

AUROC/AUPRC on Test Data
Loss: 0.4728366624801717
AUROC: 0.8310429833878427
AUPRC: 0.6838658943880112
Sensitivity: 0.7484035759897829
Specificity: 0.7607502287282708
Threshold: 0.22
Accuracy:  0.757494105759515

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0027.model
AUROC/AUPRC on Validation Data
Loss: 0.5596782304346561
AUROC: 0.8364247110311493
AUPRC: 0.7031472902175793
Sensitivity: 0.7563451776649747
Specificity: 0.7735583684950773
Threshold: 0.14
Accuracy:  0.7685047193243915

AUROC/AUPRC on Test Data
Loss: 0.5267774729018516
AUROC: 0.827803031365277
AUPRC: 0.6758589117552961
Sensitivity: 0.7522349936143039
Specificity: 0.7566331198536139
Threshold: 0.13
Accuracy:  0.755473223307511

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0028.model
AUROC/AUPRC on Validation Data
Loss: 0.5585334450006485
AUROC: 0.8367876325853578
AUPRC: 0.7031991783755777
Sensitivity: 0.7681895093062606
Specificity: 0.7461322081575246
Threshold: 0.13
Accuracy:  0.7526080476900149

AUROC/AUPRC on Test Data
Loss: 0.5284838156497225
AUROC: 0.8281315909088255
AUPRC: 0.6765824427649264
Sensitivity: 0.7445721583652618
Specificity: 0.7641811527904849
Threshold: 0.13
Accuracy:  0.759009767598518

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0029.model
AUROC/AUPRC on Validation Data
Loss: 0.47829911671578884
AUROC: 0.8399658734748371
AUPRC: 0.7040287187819882
Sensitivity: 0.7614213197969543
Specificity: 0.7728551336146273
Threshold: 0.24
Accuracy:  0.76949826130154

AUROC/AUPRC on Test Data
Loss: 0.45747334716167853
AUROC: 0.8311139680236124
AUPRC: 0.6833599158735215
Sensitivity: 0.7618135376756067
Specificity: 0.7481701738334858
Threshold: 0.23
Accuracy:  0.7517682721455036

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0030.model
AUROC/AUPRC on Validation Data
Loss: 0.5520528722554445
AUROC: 0.8416924281474819
AUPRC: 0.7057655067865142
Sensitivity: 0.7715736040609137
Specificity: 0.7580872011251758
Threshold: 0.14
Accuracy:  0.762046696472926

AUROC/AUPRC on Test Data
Loss: 0.5213562503774115
AUROC: 0.8323533510006205
AUPRC: 0.6790082321600199
Sensitivity: 0.7452107279693486
Specificity: 0.7689844464775847
Threshold: 0.14
Accuracy:  0.7627147187605254

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0031.model
AUROC/AUPRC on Validation Data
Loss: 0.5458386410027742
AUROC: 0.8435671262086476
AUPRC: 0.7060601927055262
Sensitivity: 0.754653130287648
Specificity: 0.7728551336146273
Threshold: 0.15
Accuracy:  0.767511177347243

AUROC/AUPRC on Test Data
Loss: 0.514551542223768
AUROC: 0.8340810819811197
AUPRC: 0.6816495166768941
Sensitivity: 0.756066411238825
Specificity: 0.7573193046660567
Threshold: 0.14
Accuracy:  0.756988885146514

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0032.model
AUROC/AUPRC on Validation Data
Loss: 0.5305797047913074
AUROC: 0.8396695866977946
AUPRC: 0.7063577458727512
Sensitivity: 0.7681895093062606
Specificity: 0.760196905766526
Threshold: 0.16
Accuracy:  0.7625434674615003

AUROC/AUPRC on Test Data
Loss: 0.5015344863876383
AUROC: 0.831350145299415
AUPRC: 0.6811044019609115
Sensitivity: 0.7630906768837803
Specificity: 0.7479414455626715
Threshold: 0.15
Accuracy:  0.7519366790165039

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0033.model
AUROC/AUPRC on Validation Data
Loss: 0.5200664941221476
AUROC: 0.840992762987237
AUPRC: 0.7070282813032236
Sensitivity: 0.7749576988155669
Specificity: 0.7580872011251758
Threshold: 0.15
Accuracy:  0.7630402384500745

AUROC/AUPRC on Test Data
Loss: 0.4926000652795142
AUROC: 0.8324958314783852
AUPRC: 0.6816128523726414
Sensitivity: 0.7496807151979565
Specificity: 0.765096065873742
Threshold: 0.15
Accuracy:  0.7610306500505221

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0034.model
AUROC/AUPRC on Validation Data
Loss: 0.5273211933672428
AUROC: 0.843305941680291
AUPRC: 0.7091398373174945
Sensitivity: 0.754653130287648
Specificity: 0.7756680731364276
Threshold: 0.16
Accuracy:  0.76949826130154

AUROC/AUPRC on Test Data
Loss: 0.49641540392916256
AUROC: 0.8338026206475903
AUPRC: 0.6832921443013683
Sensitivity: 0.7509578544061303
Specificity: 0.7605215004574566
Threshold: 0.15
Accuracy:  0.757999326372516

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0035.model
AUROC/AUPRC on Validation Data
Loss: 0.5330711267888546
AUROC: 0.8371410348856857
AUPRC: 0.70311401696437
Sensitivity: 0.7681895093062606
Specificity: 0.7538677918424754
Threshold: 0.16
Accuracy:  0.7580725285643318

AUROC/AUPRC on Test Data
Loss: 0.5063500711892513
AUROC: 0.8285791154438029
AUPRC: 0.6754321621828007
Sensitivity: 0.756066411238825
Specificity: 0.7490850869167429
Threshold: 0.15
Accuracy:  0.7509262377905018

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0036.model
AUROC/AUPRC on Validation Data
Loss: 0.5224865265190601
AUROC: 0.8344280475296346
AUPRC: 0.7011210001539068
Sensitivity: 0.7631133671742809
Specificity: 0.7419127988748242
Threshold: 0.16
Accuracy:  0.7481371087928465

AUROC/AUPRC on Test Data
Loss: 0.49592350518449824
AUROC: 0.8264868213956456
AUPRC: 0.6745482636012463
Sensitivity: 0.7445721583652618
Specificity: 0.7609789569990851
Threshold: 0.16
Accuracy:  0.7566520714045133

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0037.model
AUROC/AUPRC on Validation Data
Loss: 0.5610541682690382
AUROC: 0.8402919079202572
AUPRC: 0.7059299243765776
Sensitivity: 0.766497461928934
Specificity: 0.7587904360056259
Threshold: 0.12
Accuracy:  0.7610531544957775

AUROC/AUPRC on Test Data
Loss: 0.5305515068008545
AUROC: 0.8315519256992425
AUPRC: 0.6795322370637397
Sensitivity: 0.7669220945083014
Specificity: 0.7454254345837146
Threshold: 0.11
Accuracy:  0.7510946446615022

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0038.model
AUROC/AUPRC on Validation Data
Loss: 0.5124184023588896
AUROC: 0.8391829148431347
AUPRC: 0.7064543785815021
Sensitivity: 0.7631133671742809
Specificity: 0.7609001406469761
Threshold: 0.16
Accuracy:  0.7615499254843517

AUROC/AUPRC on Test Data
Loss: 0.4849312020109055
AUROC: 0.8313445950604041
AUPRC: 0.6799037929885133
Sensitivity: 0.7586206896551724
Specificity: 0.752516010978957
Threshold: 0.15
Accuracy:  0.7541259683395083

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0039.model
AUROC/AUPRC on Validation Data
Loss: 0.4887825846672058
AUROC: 0.8408452145520835
AUPRC: 0.7065306829541897
Sensitivity: 0.751269035532995
Specificity: 0.7735583684950773
Threshold: 0.2
Accuracy:  0.7670144063586687

AUROC/AUPRC on Test Data
Loss: 0.46476046487371975
AUROC: 0.8319205053872373
AUPRC: 0.6816518886638873
Sensitivity: 0.7458492975734355
Specificity: 0.7630375114364135
Threshold: 0.19
Accuracy:  0.7585045469855171

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0040.model
AUROC/AUPRC on Validation Data
Loss: 0.4919173773378134
AUROC: 0.8455846130780269
AUPRC: 0.7101063157929499
Sensitivity: 0.7614213197969543
Specificity: 0.7665260196905767
Threshold: 0.21
Accuracy:  0.7650273224043715

AUROC/AUPRC on Test Data
Loss: 0.46680755405984026
AUROC: 0.8353372617340816
AUPRC: 0.6829512961677466
Sensitivity: 0.7650063856960408
Specificity: 0.7461116193961573
Threshold: 0.2
Accuracy:  0.7510946446615022

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0041.model
AUROC/AUPRC on Validation Data
Loss: 0.4984473157674074
AUROC: 0.8451776649746193
AUPRC: 0.7096486082882708
Sensitivity: 0.766497461928934
Specificity: 0.7644163150492265
Threshold: 0.17
Accuracy:  0.7650273224043715

AUROC/AUPRC on Test Data
Loss: 0.4735477753776185
AUROC: 0.8353738494938766
AUPRC: 0.6834031409239039
Sensitivity: 0.7605363984674329
Specificity: 0.7573193046660567
Threshold: 0.16
Accuracy:  0.7581677332435164

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0042.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.60it/s]
Loss: 0.534514544531703
AUROC: 0.8389996692059276
AUPRC: 0.7070831522428234
Sensitivity: 0.7597292724196277
Specificity: 0.7658227848101266
Threshold: 0.16
Accuracy:  0.764033780427223

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.65it/s]
Loss: 0.5057047259934405
AUROC: 0.8310793520592555
AUPRC: 0.6795977031008814
Sensitivity: 0.756066411238825
Specificity: 0.7564043915827996
Threshold: 0.15
Accuracy:  0.7563152576625126

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0043.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.59it/s]
Loss: 0.4921764899045229
AUROC: 0.8462866580517419
AUPRC: 0.7095219249887756
Sensitivity: 0.7648054145516074
Specificity: 0.770745428973277
Threshold: 0.19
Accuracy:  0.7690014903129657

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.63it/s]
Loss: 0.46912682563700575
AUROC: 0.8362518096700354
AUPRC: 0.6844613113192448
Sensitivity: 0.7535121328224776
Specificity: 0.7582342177493138
Threshold: 0.18
Accuracy:  0.756988885146514

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0044.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.61it/s]
Loss: 0.4797909054905176
AUROC: 0.8382202802944305
AUPRC: 0.7043416060195302
Sensitivity: 0.7648054145516074
Specificity: 0.7573839662447257
Threshold: 0.18
Accuracy:  0.7595628415300546

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.65it/s]
Loss: 0.4562937874109187
AUROC: 0.8302623714827551
AUPRC: 0.6783201640550081
Sensitivity: 0.7567049808429118
Specificity: 0.7529734675205856
Threshold: 0.17
Accuracy:  0.7539575614685079

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0045.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.61it/s]
Loss: 0.511174937710166
AUROC: 0.8398504525215313
AUPRC: 0.7072177572474374
Sensitivity: 0.754653130287648
Specificity: 0.7693389592123769
Threshold: 0.17
Accuracy:  0.7650273224043715

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.66it/s]
Loss: 0.48412304672789064
AUROC: 0.8309372367287943
AUPRC: 0.6789103663114289
Sensitivity: 0.7477650063856961
Specificity: 0.7644098810612991
Threshold: 0.16
Accuracy:  0.76002020882452

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0046.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.61it/s]
Loss: 0.48806891962885857
AUROC: 0.8457202624458294
AUPRC: 0.7100289766911093
Sensitivity: 0.7698815566835872
Specificity: 0.7587904360056259
Threshold: 0.18
Accuracy:  0.762046696472926

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.65it/s]
Loss: 0.4668007993951757
AUROC: 0.835293736175523
AUPRC: 0.6840659421635563
Sensitivity: 0.7605363984674329
Specificity: 0.7518298261665142
Threshold: 0.17
Accuracy:  0.7541259683395083

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0047.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.61it/s]
Loss: 0.4699779059737921
AUROC: 0.8472528623206513
AUPRC: 0.7070600902820107
Sensitivity: 0.7681895093062606
Specificity: 0.7651195499296765
Threshold: 0.22
Accuracy:  0.7660208643815202

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.66it/s]
Loss: 0.44828000728120193
AUROC: 0.8372542850766341
AUPRC: 0.6862978428115737
Sensitivity: 0.7484035759897829
Specificity: 0.7692131747483989
Threshold: 0.22
Accuracy:  0.7637251599865275

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0048.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.57it/s]
Loss: 0.49783168360590935
AUROC: 0.8404204178476491
AUPRC: 0.7087968621316504
Sensitivity: 0.7715736040609137
Specificity: 0.7566807313642757
Threshold: 0.18
Accuracy:  0.7610531544957775

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.63it/s]
Loss: 0.4699046916150032
AUROC: 0.8325687148801324
AUPRC: 0.6820529442106928
Sensitivity: 0.7630906768837803
Specificity: 0.7479414455626715
Threshold: 0.17
Accuracy:  0.7519366790165039

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0049.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.60it/s]
Loss: 0.46606326289474964
AUROC: 0.8435391634003727
AUPRC: 0.7097552817795016
Sensitivity: 0.7698815566835872
Specificity: 0.7573839662447257
Threshold: 0.23
Accuracy:  0.7610531544957775

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.66it/s]
Loss: 0.44571863431879816
AUROC: 0.8343311348544493
AUPRC: 0.6831077696035767
Sensitivity: 0.7650063856960408
Specificity: 0.7454254345837146
Threshold: 0.22
Accuracy:  0.7505894240485012

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0050.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.60it/s]
Loss: 0.47092502377927303
AUROC: 0.8420303616602531
AUPRC: 0.7078393997883271
Sensitivity: 0.7597292724196277
Specificity: 0.770042194092827
Threshold: 0.2
Accuracy:  0.7670144063586687

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.64it/s]
Loss: 0.4479425980689678
AUROC: 0.833274763705877
AUPRC: 0.6822382302087866
Sensitivity: 0.7509578544061303
Specificity: 0.7598353156450137
Threshold: 0.19
Accuracy:  0.757494105759515

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0051.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.59it/s]
Loss: 0.45012721233069897
AUROC: 0.8451217393580691
AUPRC: 0.7099539722445672
Sensitivity: 0.7681895093062606
Specificity: 0.7552742616033755
Threshold: 0.24
Accuracy:  0.7590660705414803

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.67it/s]
Loss: 0.4314588742687347
AUROC: 0.8349128145086753
AUPRC: 0.6828341388779533
Sensitivity: 0.7547892720306514
Specificity: 0.7607502287282708
Threshold: 0.24
Accuracy:  0.7591781744695184

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0052.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.60it/s]
Loss: 0.4550383687019348
AUROC: 0.8406774377024329
AUPRC: 0.7083423361758092
Sensitivity: 0.7715736040609137
Specificity: 0.7489451476793249
Threshold: 0.25
Accuracy:  0.7555886736214605

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.65it/s]
Loss: 0.4355377329790846
AUROC: 0.8319814119574348
AUPRC: 0.6819340540611901
Sensitivity: 0.7522349936143039
Specificity: 0.760064043915828
Threshold: 0.25
Accuracy:  0.757999326372516

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0053.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.55it/s]
Loss: 0.4659064430743456
AUROC: 0.8432702444782377
AUPRC: 0.7087378897136032
Sensitivity: 0.7495769881556683
Specificity: 0.7728551336146273
Threshold: 0.24
Accuracy:  0.7660208643815202

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.65it/s]
Loss: 0.4468767202280937
AUROC: 0.8336619659063422
AUPRC: 0.6824362662936432
Sensitivity: 0.7528735632183908
Specificity: 0.757548032936871
Threshold: 0.23
Accuracy:  0.7563152576625126

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0054.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.59it/s]
Loss: 0.5534097757190466
AUROC: 0.8409487364380381
AUPRC: 0.7093610517127196
Sensitivity: 0.7681895093062606
Specificity: 0.7587904360056259
Threshold: 0.1
Accuracy:  0.7615499254843517

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.64it/s]
Loss: 0.5174458188579437
AUROC: 0.8321366725908164
AUPRC: 0.6796645763046643
Sensitivity: 0.7503192848020435
Specificity: 0.7655535224153706
Threshold: 0.1
Accuracy:  0.761535870663523

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0055.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.60it/s]
Loss: 0.5105922631919384
AUROC: 0.8425420215563505
AUPRC: 0.7114061773634157
Sensitivity: 0.7698815566835872
Specificity: 0.7580872011251758
Threshold: 0.14
Accuracy:  0.7615499254843517

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.65it/s]
Loss: 0.48093417255168264
AUROC: 0.83415973471026
AUPRC: 0.682488479228398
Sensitivity: 0.7528735632183908
Specificity: 0.7641811527904849
Threshold: 0.14
Accuracy:  0.7611990569215223

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0056.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.60it/s]
Loss: 0.4677225463092327
AUROC: 0.8481333933046328
AUPRC: 0.7129242186061564
Sensitivity: 0.7580372250423012
Specificity: 0.770042194092827
Threshold: 0.21
Accuracy:  0.7665176353700944

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.65it/s]
Loss: 0.4473597851205379
AUROC: 0.8363596741834429
AUPRC: 0.6850139162887978
Sensitivity: 0.7573435504469987
Specificity: 0.7541171088746569
Threshold: 0.2
Accuracy:  0.7549680026945099

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0057.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.59it/s]
Loss: 0.4805305376648903
AUROC: 0.8452799969538388
AUPRC: 0.7118244442462639
Sensitivity: 0.7698815566835872
Specificity: 0.7637130801687764
Threshold: 0.18
Accuracy:  0.7655240933929458

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.64it/s]
Loss: 0.4580880127688672
AUROC: 0.8348843330190144
AUPRC: 0.6843539476757631
Sensitivity: 0.7592592592592593
Specificity: 0.7481701738334858
Threshold: 0.17
Accuracy:  0.7510946446615022

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0058.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.59it/s]
Loss: 0.5041318666189909
AUROC: 0.8414175596916714
AUPRC: 0.7089710197800071
Sensitivity: 0.7563451776649747
Specificity: 0.7658227848101266
Threshold: 0.17
Accuracy:  0.7630402384500745

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.63it/s]
Loss: 0.4775675868100308
AUROC: 0.83265036181716
AUPRC: 0.6808925743193961
Sensitivity: 0.7496807151979565
Specificity: 0.760064043915828
Threshold: 0.16
Accuracy:  0.7573256988885146

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0059.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.60it/s]
Loss: 0.4698580726981163
AUROC: 0.8413973312771745
AUPRC: 0.709149653008773
Sensitivity: 0.7648054145516074
Specificity: 0.7580872011251758
Threshold: 0.21
Accuracy:  0.7600596125186289

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.66it/s]
Loss: 0.4480456643282099
AUROC: 0.8324984605389691
AUPRC: 0.6810588899243145
Sensitivity: 0.7458492975734355
Specificity: 0.7666971637694419
Threshold: 0.21
Accuracy:  0.7611990569215223

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0060.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.60it/s]
Loss: 0.5066906604915857
AUROC: 0.8423706749864945
AUPRC: 0.7115096807816097
Sensitivity: 0.7681895093062606
Specificity: 0.7637130801687764
Threshold: 0.16
Accuracy:  0.7650273224043715

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.64it/s]
Loss: 0.4785267469730783
AUROC: 0.8342320338763219
AUPRC: 0.6826690889097311
Sensitivity: 0.7586206896551724
Specificity: 0.7552607502287283
Threshold: 0.15
Accuracy:  0.7561468507915123

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0061.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.59it/s]
Loss: 0.5053694043308496
AUROC: 0.8426538727894508
AUPRC: 0.7095927069259095
Sensitivity: 0.7563451776649747
Specificity: 0.7686357243319268
Threshold: 0.14
Accuracy:  0.7650273224043715

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.66it/s]
Loss: 0.477945734845831
AUROC: 0.8333372769242094
AUPRC: 0.6798350748370718
Sensitivity: 0.7535121328224776
Specificity: 0.7616651418115279
Threshold: 0.13
Accuracy:  0.7595149882115191

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0062.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.60it/s]
Loss: 0.4672814477235079
AUROC: 0.8483856535324761
AUPRC: 0.7110809510912552
Sensitivity: 0.7478849407783418
Specificity: 0.7841068917018285
Threshold: 0.24
Accuracy:  0.7734724292101341

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.63it/s]
Loss: 0.4458574570239858
AUROC: 0.8377536605286865
AUPRC: 0.6875159857858222
Sensitivity: 0.7733077905491699
Specificity: 0.7479414455626715
Threshold: 0.23
Accuracy:  0.7546311889525092

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0063.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.60it/s]
Loss: 0.505318496376276
AUROC: 0.8496148271898449
AUPRC: 0.7173617896597516
Sensitivity: 0.7698815566835872
Specificity: 0.7587904360056259
Threshold: 0.13
Accuracy:  0.762046696472926

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.67it/s]
Loss: 0.4778517763665382
AUROC: 0.8374305781946884
AUPRC: 0.6835437904079139
Sensitivity: 0.7515964240102171
Specificity: 0.7669258920402562
Threshold: 0.13
Accuracy:  0.7628831256315257

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0064.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.61it/s]
Loss: 0.4783692695200443
AUROC: 0.8496421950447524
AUPRC: 0.7166632784086057
Sensitivity: 0.7800338409475466
Specificity: 0.7390998593530239
Threshold: 0.23
Accuracy:  0.7511177347242921

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.65it/s]
Loss: 0.45459363561995486
AUROC: 0.8384763600714636
AUPRC: 0.6848217940857141
Sensitivity: 0.7630906768837803
Specificity: 0.752516010978957
Threshold: 0.23
Accuracy:  0.7553048164365106

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0065.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.61it/s]
Loss: 0.49407162331044674
AUROC: 0.8458487723732213
AUPRC: 0.7135048922284319
Sensitivity: 0.7631133671742809
Specificity: 0.7609001406469761
Threshold: 0.15
Accuracy:  0.7615499254843517

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.67it/s]
Loss: 0.4660486823066752
AUROC: 0.8355779668364455
AUPRC: 0.6837657719481365
Sensitivity: 0.7573435504469987
Specificity: 0.755032021957914
Threshold: 0.14
Accuracy:  0.7556416301785113

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0066.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.61it/s]
Loss: 0.4571539629250765
AUROC: 0.8453502014512102
AUPRC: 0.7133069445258139
Sensitivity: 0.7529610829103215
Specificity: 0.770042194092827
Threshold: 0.21
Accuracy:  0.7650273224043715

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.66it/s]
Loss: 0.4365758515418844
AUROC: 0.8344219835035211
AUPRC: 0.6821423860702955
Sensitivity: 0.7579821200510856
Specificity: 0.7445105215004575
Threshold: 0.19
Accuracy:  0.7480633209834961

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0067.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.61it/s]
Loss: 0.5746780391782522
AUROC: 0.847735964455106
AUPRC: 0.7156660659944717
Sensitivity: 0.7631133671742809
Specificity: 0.7763713080168776
Threshold: 0.09
Accuracy:  0.7724788872329856

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.63it/s]
Loss: 0.5392758434123182
AUROC: 0.8374216686004868
AUPRC: 0.6834453402037401
Sensitivity: 0.7701149425287356
Specificity: 0.747483989021043
Threshold: 0.08
Accuracy:  0.7534523408555069

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0068.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.60it/s]
Loss: 0.46719955652952194
AUROC: 0.8424682473387736
AUPRC: 0.7103825540333577
Sensitivity: 0.7563451776649747
Specificity: 0.7644163150492265
Threshold: 0.2
Accuracy:  0.762046696472926

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.64it/s]
Loss: 0.4449224700319006
AUROC: 0.8330485914661862
AUPRC: 0.6785472524980224
Sensitivity: 0.7598978288633461
Specificity: 0.7495425434583715
Threshold: 0.18
Accuracy:  0.7522734927585045

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0069.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.61it/s]
Loss: 0.46456499211490154
AUROC: 0.8486581421748165
AUPRC: 0.716370847374955
Sensitivity: 0.7529610829103215
Specificity: 0.7735583684950773
Threshold: 0.22
Accuracy:  0.767511177347243

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.66it/s]
Loss: 0.4435988376115231
AUROC: 0.836959830291218
AUPRC: 0.6863693108009181
Sensitivity: 0.7618135376756067
Specificity: 0.7470265324794144
Threshold: 0.21
Accuracy:  0.7509262377905018

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0070.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.61it/s]
Loss: 0.4652945641428232
AUROC: 0.847044628642007
AUPRC: 0.7161263714611706
Sensitivity: 0.7648054145516074
Specificity: 0.7573839662447257
Threshold: 0.22
Accuracy:  0.7595628415300546

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.65it/s]
Loss: 0.44567582835542396
AUROC: 0.8355601476480423
AUPRC: 0.684579725023421
Sensitivity: 0.7592592592592593
Specificity: 0.7516010978957
Threshold: 0.21
Accuracy:  0.7536207477265072

Intermediate Model:
  ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0071.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.61it/s]
Loss: 0.4513649269938469
AUROC: 0.8455346369951524
AUPRC: 0.7157240889428991
Sensitivity: 0.7580372250423012
Specificity: 0.7623066104078763
Threshold: 0.23
Accuracy:  0.7610531544957775

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.67it/s]
Loss: 0.4327330582953514
AUROC: 0.834559351919039
AUPRC: 0.6839379545365787
Sensitivity: 0.756066411238825
Specificity: 0.7566331198536139
Threshold: 0.22
Accuracy:  0.756483664533513


Plot AUROC/AUPRC for Each Intermediate Model
  Epoch with best Validation Loss:      51, 0.4497
  Epoch with best model Test AUROC:     64, 0.8385
  Epoch with best model Test Accuracy:   2, 0.7661
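
The checkpoint selection summarized above (best epoch by validation loss vs. best epoch by test AUROC) can be sketched as a simple argmin/argmax over the per-epoch metric lists. This is an illustrative reconstruction, not the project's actual selection code; the names `val_losses` and `test_aurocs` are hypothetical.

```python
def select_best_epochs(val_losses, test_aurocs):
    """Return (epoch with lowest validation loss, epoch with highest test AUROC).

    val_losses and test_aurocs are hypothetical per-epoch metric lists,
    indexed by epoch, mirroring the values logged above.
    """
    # argmin over validation loss: the checkpoint the log calls
    # "Best Model Based on Validation Loss"
    best_val_epoch = min(range(len(val_losses)), key=lambda i: val_losses[i])
    # argmax over test AUROC: "Best Model Based on Model AUROC"
    best_auroc_epoch = max(range(len(test_aurocs)), key=lambda i: test_aurocs[i])
    return best_val_epoch, best_auroc_epoch
```

Note that selecting a checkpoint by *test* AUROC leaks test information into model selection; the validation-loss criterion is the one that generalizes honestly.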

AUROC/AUPRC Plots - Best Model Based on Validation Loss
  Epoch with best Validation Loss:   51, 0.4497
  Best Model Based on Validation Loss:
    ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0051.model

Generate Stats Based on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.61it/s]
Loss: 0.4314588742687347
AUROC: 0.8349128145086753
AUPRC: 0.6828341388779533
Sensitivity: 0.7547892720306514
Specificity: 0.7607502287282708
Threshold: 0.24
Accuracy:  0.7591781744695184
best_model_val_test_auroc: 0.8349128145086753
best_model_val_test_auprc: 0.6828341388779533

AUROC/AUPRC Plots - Best Model Based on Model AUROC
  Epoch with best model Test AUROC:  64, 0.8385
  Best Model Based on Model AUROC:
    ./vitaldb_cache/models/ABP_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_78ab3604_0064.model

Generate Stats Based on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.66it/s]
Loss: 0.45459363561995486
AUROC: 0.8384763600714636
AUPRC: 0.6848217940857141
Sensitivity: 0.7630906768837803
Specificity: 0.752516010978957
Threshold: 0.23
Accuracy:  0.7553048164365106
best_model_auroc_test_auroc: 0.8384763600714636
best_model_auroc_test_auprc: 0.6848217940857141
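
The per-checkpoint Sensitivity/Specificity/Threshold lines above are consistent with sweeping a decision threshold over the model's output probabilities. A minimal, self-contained sketch of that evaluation (not the project's actual code; `auroc`, `best_threshold`, and the grid are illustrative, and the threshold criterion shown here is Youden's J, which the log does not confirm):

```python
def auroc(labels, scores):
    """Pairwise-ranking AUROC: fraction of (positive, negative) pairs
    where the positive example is scored higher (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def best_threshold(labels, scores, grid):
    """Pick the threshold from `grid` maximizing Youden's J
    (sensitivity + specificity - 1)."""
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    def j(t):
        tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= t)
        tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < t)
        return tp / n_pos + tn / n_neg - 1
    return max(grid, key=j)
```

In practice a library routine such as scikit-learn's `roc_auc_score` / `roc_curve` would be used instead of hand-rolling these loops; the sketch only makes the reported quantities concrete.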

Total Processing Time: 5463.5540 sec
Experiment Setup
  name:              ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c
  prediction_window: 003
  max_cases:         _ALL
  use_abp:           True
  use_eeg:           False
  use_ecg:           True
  n_residuals:       12
  skip_connection:   False
  batch_size:        128
  learning_rate:     0.0001
  weight_decay:      0.1
  max_epochs:        200
  patience:          20
  device:            mps
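
Given the `patience: 20` / `max_epochs: 200` settings above, the training loop presumably stops early once validation loss fails to improve for 20 consecutive epochs. A hedged sketch of that mechanism (class and method names are hypothetical, not taken from the project code):

```python
class EarlyStopper:
    """Stop training after `patience` consecutive epochs without
    improvement in validation loss."""

    def __init__(self, patience):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True when the
        no-improvement streak reaches `patience`."""
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

The epoch-by-epoch checkpoints logged above (`..._0040.model` through `..._0071.model`) are what make this scheme recoverable: when the stopper fires, the checkpoint from the best-loss epoch is reloaded rather than the final one.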

Model Architecture
HypotensionCNN(
  (abpResiduals): Sequential(
    (0): ResidualBlock(
      (bn1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (1): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (2): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (3): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (4): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (5): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (6): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (7): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (8): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=1, dilation=1, ceil_mode=False)
    )
    (9): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (10): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (11): ResidualBlock(
      (bn1): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
  )
  (abpFc): Linear(in_features=2814, out_features=32, bias=True)
  (ecgResiduals): Sequential(
    (0): ResidualBlock(
      (bn1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (1): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (2): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (3): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (4): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (5): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (6): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (7): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (8): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=1, dilation=1, ceil_mode=False)
    )
    (9): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (10): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (11): ResidualBlock(
      (bn1): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
  )
  (ecgFc): Linear(in_features=2814, out_features=32, bias=True)
  (fullLinear1): Linear(in_features=64, out_features=16, bias=True)
  (fullLinear2): Linear(in_features=16, out_features=1, bias=True)
  (sigmoid): Sigmoid()
)
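The module names and shapes in the printout above suggest a pre-activation 1-D residual block with a learned shortcut (`residualConv`) and optional max-pool downsampling. The sketch below reconstructs such a block from the printed fields; the ordering of operations inside `forward()` is an assumption (a module `repr` does not show it) and this is not the project's exact implementation.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Sketch of a pre-activation 1-D residual block matching the printed
    module names (bn1, relu, dropout, conv1, bn2, conv2, residualConv,
    optional downsample). The forward ordering is assumed, not confirmed."""

    def __init__(self, in_ch, out_ch, kernel_size=15, downsample=False, p_drop=0.5):
        super().__init__()
        pad = kernel_size // 2  # "same" padding, e.g. 7 for kernel 15
        self.bn1 = nn.BatchNorm1d(in_ch)
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(p_drop)
        self.conv1 = nn.Conv1d(in_ch, out_ch, kernel_size, padding=pad, bias=False)
        self.bn2 = nn.BatchNorm1d(out_ch)
        self.conv2 = nn.Conv1d(out_ch, out_ch, kernel_size, padding=pad, bias=False)
        # Learned shortcut so the residual sum works when in_ch != out_ch
        self.residualConv = nn.Conv1d(in_ch, out_ch, kernel_size, padding=pad, bias=False)
        self.downsample = nn.MaxPool1d(kernel_size=2, stride=2) if downsample else None

    def forward(self, x):
        out = self.conv1(self.dropout(self.relu(self.bn1(x))))
        out = self.conv2(self.dropout(self.relu(self.bn2(out))))
        out = out + self.residualConv(x)       # residual sum
        if self.downsample is not None:
            out = self.downsample(out)          # halves the time dimension
        return out

# Example matching block (0) of abpResiduals: 1 -> 2 channels, with downsampling
block = ResidualBlock(1, 2, kernel_size=15, downsample=True)
y = block(torch.randn(8, 1, 100))
print(y.shape)  # torch.Size([8, 2, 50]) — length halved by the MaxPool1d
```

Stacking twelve such blocks with the channel widths and kernel sizes shown in the printout, then flattening into the 2814-feature `Linear` head, reproduces the per-signal (ABP/ECG) branch structure.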

Training Loop
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:55<00:00,  1.64it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.51it/s]
[2024-05-05 14:46:19.208397] Completed epoch 0 with training loss 0.50498545, validation loss 0.65604651
Validation loss improved to 0.65604651. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.67it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.55it/s]
[2024-05-05 14:47:20.583977] Completed epoch 1 with training loss 0.43999344, validation loss 0.63263452
Validation loss improved to 0.63263452. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.69it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.54it/s]
[2024-05-05 14:48:21.536700] Completed epoch 2 with training loss 0.43480554, validation loss 0.65606368
No improvement in validation loss. 1 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.68it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.55it/s]
[2024-05-05 14:49:22.548006] Completed epoch 3 with training loss 0.43582305, validation loss 0.68558502
No improvement in validation loss. 2 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.68it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.55it/s]
[2024-05-05 14:50:23.738835] Completed epoch 4 with training loss 0.43498424, validation loss 0.66000211
No improvement in validation loss. 3 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.68it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.55it/s]
[2024-05-05 14:51:24.838590] Completed epoch 5 with training loss 0.43199125, validation loss 0.65474963
No improvement in validation loss. 4 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.67it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.54it/s]
[2024-05-05 14:52:26.142472] Completed epoch 6 with training loss 0.43400213, validation loss 0.58581859
Validation loss improved to 0.58581859. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.69it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.55it/s]
[2024-05-05 14:53:26.968577] Completed epoch 7 with training loss 0.42922172, validation loss 0.53293931
Validation loss improved to 0.53293931. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.68it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.55it/s]
[2024-05-05 14:54:28.176763] Completed epoch 8 with training loss 0.43213928, validation loss 0.56339163
No improvement in validation loss. 1 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.69it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.53it/s]
[2024-05-05 14:55:29.118595] Completed epoch 9 with training loss 0.42984292, validation loss 0.62532485
No improvement in validation loss. 2 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.69it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.54it/s]
[2024-05-05 14:56:30.033518] Completed epoch 10 with training loss 0.43032616, validation loss 0.57626355
No improvement in validation loss. 3 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.69it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.48it/s]
[2024-05-05 14:57:31.107413] Completed epoch 11 with training loss 0.43017220, validation loss 0.51595700
Validation loss improved to 0.51595700. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.68it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.41it/s]
[2024-05-05 14:58:32.666198] Completed epoch 12 with training loss 0.43009663, validation loss 0.54033792
No improvement in validation loss. 1 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.69it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
[2024-05-05 14:59:33.489700] Completed epoch 13 with training loss 0.42870483, validation loss 0.54225940
No improvement in validation loss. 2 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.70it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.54it/s]
[2024-05-05 15:00:33.933125] Completed epoch 14 with training loss 0.42902991, validation loss 0.59155488
No improvement in validation loss. 3 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.69it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.51it/s]
[2024-05-05 15:01:34.740867] Completed epoch 15 with training loss 0.42700246, validation loss 0.51588315
Validation loss improved to 0.51588315. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.69it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.53it/s]
[2024-05-05 15:02:35.539766] Completed epoch 16 with training loss 0.42837229, validation loss 0.55101240
No improvement in validation loss. 1 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.68it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.53it/s]
[2024-05-05 15:03:36.629950] Completed epoch 17 with training loss 0.42580029, validation loss 0.51659429
No improvement in validation loss. 2 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.69it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.54it/s]
[2024-05-05 15:04:37.552219] Completed epoch 18 with training loss 0.42575330, validation loss 0.56082994
No improvement in validation loss. 3 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.68it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.53it/s]
[2024-05-05 15:05:38.598288] Completed epoch 19 with training loss 0.42520151, validation loss 0.56318659
No improvement in validation loss. 4 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.69it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.54it/s]
[2024-05-05 15:06:39.547786] Completed epoch 20 with training loss 0.42429426, validation loss 0.54654109
No improvement in validation loss. 5 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.69it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.53it/s]
[2024-05-05 15:07:40.472223] Completed epoch 21 with training loss 0.42508158, validation loss 0.59208262
No improvement in validation loss. 6 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.69it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.52it/s]
[2024-05-05 15:08:41.456089] Completed epoch 22 with training loss 0.42568356, validation loss 0.55557621
No improvement in validation loss. 7 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.68it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.55it/s]
[2024-05-05 15:09:42.596877] Completed epoch 23 with training loss 0.42564866, validation loss 0.52702165
No improvement in validation loss. 8 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.69it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.55it/s]
[2024-05-05 15:10:43.470694] Completed epoch 24 with training loss 0.42459682, validation loss 0.53026712
No improvement in validation loss. 9 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.68it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.54it/s]
[2024-05-05 15:11:44.512849] Completed epoch 25 with training loss 0.42454708, validation loss 0.59229791
No improvement in validation loss. 10 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.69it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.55it/s]
[2024-05-05 15:12:45.337309] Completed epoch 26 with training loss 0.42587399, validation loss 0.55112934
No improvement in validation loss. 11 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.68it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.54it/s]
[2024-05-05 15:13:46.544207] Completed epoch 27 with training loss 0.42299041, validation loss 0.53884268
No improvement in validation loss. 12 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.68it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.53it/s]
[2024-05-05 15:14:47.628440] Completed epoch 28 with training loss 0.42259321, validation loss 0.52523547
No improvement in validation loss. 13 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.68it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.51it/s]
[2024-05-05 15:15:48.776202] Completed epoch 29 with training loss 0.42402053, validation loss 0.51121169
Validation loss improved to 0.51121169. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:54<00:00,  1.68it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.40it/s]
[2024-05-05 15:16:50.406483] Completed epoch 30 with training loss 0.42323110, validation loss 0.52256107
No improvement in validation loss. 1 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:55<00:00,  1.65it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.44it/s]
[2024-05-05 15:17:52.800736] Completed epoch 31 with training loss 0.42327175, validation loss 0.46653870
Validation loss improved to 0.46653870. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:55<00:00,  1.65it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
[2024-05-05 15:18:55.173290] Completed epoch 32 with training loss 0.42308858, validation loss 0.49372268
No improvement in validation loss. 1 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:55<00:00,  1.65it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.48it/s]
[2024-05-05 15:19:57.491347] Completed epoch 33 with training loss 0.42419401, validation loss 0.53523898
No improvement in validation loss. 2 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:55<00:00,  1.66it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.48it/s]
[2024-05-05 15:20:59.602021] Completed epoch 34 with training loss 0.42194378, validation loss 0.49862593
No improvement in validation loss. 3 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:55<00:00,  1.65it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
[2024-05-05 15:22:01.957962] Completed epoch 35 with training loss 0.42147192, validation loss 0.50524938
No improvement in validation loss. 4 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:55<00:00,  1.65it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
[2024-05-05 15:23:04.149222] Completed epoch 36 with training loss 0.42207971, validation loss 0.49475065
No improvement in validation loss. 5 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:55<00:00,  1.65it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.48it/s]
[2024-05-05 15:24:06.345907] Completed epoch 37 with training loss 0.42117956, validation loss 0.51047742
No improvement in validation loss. 6 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:55<00:00,  1.65it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
[2024-05-05 15:25:08.574422] Completed epoch 38 with training loss 0.42298856, validation loss 0.51264262
No improvement in validation loss. 7 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:55<00:00,  1.65it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.48it/s]
[2024-05-05 15:26:10.815805] Completed epoch 39 with training loss 0.42178363, validation loss 0.51320648
No improvement in validation loss. 8 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:55<00:00,  1.65it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
[2024-05-05 15:27:13.049512] Completed epoch 40 with training loss 0.41978294, validation loss 0.55431545
No improvement in validation loss. 9 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:55<00:00,  1.65it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.45it/s]
[2024-05-05 15:28:15.432511] Completed epoch 41 with training loss 0.42340863, validation loss 0.47529638
No improvement in validation loss. 10 epochs without improvement.
[2024-05-05 15:29:17.422402] Completed epoch 42 with training loss 0.42299610, validation loss 0.52938902
No improvement in validation loss. 11 epochs without improvement.
[2024-05-05 15:30:19.657115] Completed epoch 43 with training loss 0.42181048, validation loss 0.45224929
Validation loss improved to 0.45224929. Model saved.
[2024-05-05 15:31:21.930095] Completed epoch 44 with training loss 0.42233571, validation loss 0.55160177
No improvement in validation loss. 1 epochs without improvement.
[2024-05-05 15:32:25.431308] Completed epoch 45 with training loss 0.42368934, validation loss 0.51528203
No improvement in validation loss. 2 epochs without improvement.
[2024-05-05 15:33:27.614197] Completed epoch 46 with training loss 0.42154896, validation loss 0.48239201
No improvement in validation loss. 3 epochs without improvement.
[2024-05-05 15:34:29.946985] Completed epoch 47 with training loss 0.42218804, validation loss 0.46249142
No improvement in validation loss. 4 epochs without improvement.
[2024-05-05 15:35:32.198115] Completed epoch 48 with training loss 0.42100957, validation loss 0.47240865
No improvement in validation loss. 5 epochs without improvement.
[2024-05-05 15:36:34.454821] Completed epoch 49 with training loss 0.42089432, validation loss 0.48055780
No improvement in validation loss. 6 epochs without improvement.
[2024-05-05 15:37:37.615930] Completed epoch 50 with training loss 0.42338774, validation loss 0.47125405
No improvement in validation loss. 7 epochs without improvement.
[2024-05-05 15:38:40.952042] Completed epoch 51 with training loss 0.42272758, validation loss 0.48128903
No improvement in validation loss. 8 epochs without improvement.
[2024-05-05 15:39:43.200854] Completed epoch 52 with training loss 0.42104650, validation loss 0.45767951
No improvement in validation loss. 9 epochs without improvement.
[2024-05-05 15:40:45.558301] Completed epoch 53 with training loss 0.42184085, validation loss 0.54521340
No improvement in validation loss. 10 epochs without improvement.
[2024-05-05 15:41:48.077343] Completed epoch 54 with training loss 0.42082143, validation loss 0.48014450
No improvement in validation loss. 11 epochs without improvement.
[2024-05-05 15:42:50.343780] Completed epoch 55 with training loss 0.42233393, validation loss 0.48172212
No improvement in validation loss. 12 epochs without improvement.
[2024-05-05 15:43:52.776932] Completed epoch 56 with training loss 0.42023593, validation loss 0.48445651
No improvement in validation loss. 13 epochs without improvement.
[2024-05-05 15:44:54.959817] Completed epoch 57 with training loss 0.42171586, validation loss 0.44993776
Validation loss improved to 0.44993776. Model saved.
[2024-05-05 15:45:57.142895] Completed epoch 58 with training loss 0.42109689, validation loss 0.48787540
No improvement in validation loss. 1 epochs without improvement.
[2024-05-05 15:46:59.542195] Completed epoch 59 with training loss 0.42043209, validation loss 0.53564143
No improvement in validation loss. 2 epochs without improvement.
[2024-05-05 15:48:00.618377] Completed epoch 60 with training loss 0.42057875, validation loss 0.49471796
No improvement in validation loss. 3 epochs without improvement.
[2024-05-05 15:49:01.385146] Completed epoch 61 with training loss 0.42065760, validation loss 0.51703179
No improvement in validation loss. 4 epochs without improvement.
[2024-05-05 15:50:02.407339] Completed epoch 62 with training loss 0.42130998, validation loss 0.47934967
No improvement in validation loss. 5 epochs without improvement.
[2024-05-05 15:51:03.418184] Completed epoch 63 with training loss 0.42068046, validation loss 0.49916595
No improvement in validation loss. 6 epochs without improvement.
[2024-05-05 15:52:04.435955] Completed epoch 64 with training loss 0.42079812, validation loss 0.47958332
No improvement in validation loss. 7 epochs without improvement.
[2024-05-05 15:53:05.404218] Completed epoch 65 with training loss 0.42024612, validation loss 0.47632593
No improvement in validation loss. 8 epochs without improvement.
[2024-05-05 15:54:07.063644] Completed epoch 66 with training loss 0.42025629, validation loss 0.58881962
No improvement in validation loss. 9 epochs without improvement.
[2024-05-05 15:55:08.704576] Completed epoch 67 with training loss 0.41876423, validation loss 0.47444618
No improvement in validation loss. 10 epochs without improvement.
[2024-05-05 15:56:09.655354] Completed epoch 68 with training loss 0.42008561, validation loss 0.48302275
No improvement in validation loss. 11 epochs without improvement.
[2024-05-05 15:57:10.645311] Completed epoch 69 with training loss 0.42086312, validation loss 0.53391635
No improvement in validation loss. 12 epochs without improvement.
[2024-05-05 15:58:11.756304] Completed epoch 70 with training loss 0.41998270, validation loss 0.53646111
No improvement in validation loss. 13 epochs without improvement.
[2024-05-05 15:59:12.630254] Completed epoch 71 with training loss 0.42137939, validation loss 0.56596184
No improvement in validation loss. 14 epochs without improvement.
[2024-05-05 16:00:13.737147] Completed epoch 72 with training loss 0.42178515, validation loss 0.52964258
No improvement in validation loss. 15 epochs without improvement.
[2024-05-05 16:01:15.304360] Completed epoch 73 with training loss 0.42076334, validation loss 0.46833372
No improvement in validation loss. 16 epochs without improvement.
[2024-05-05 16:02:16.318983] Completed epoch 74 with training loss 0.42131561, validation loss 0.46755123
No improvement in validation loss. 17 epochs without improvement.
[2024-05-05 16:03:17.262094] Completed epoch 75 with training loss 0.41985542, validation loss 0.47097152
No improvement in validation loss. 18 epochs without improvement.
[2024-05-05 16:04:18.735534] Completed epoch 76 with training loss 0.41950101, validation loss 0.47508821
No improvement in validation loss. 19 epochs without improvement.
[2024-05-05 16:05:19.696123] Completed epoch 77 with training loss 0.41985193, validation loss 0.56968719
No improvement in validation loss. 20 epochs without improvement.
Early stopping due to no improvement in validation loss.
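The pattern in the log above is patience-based early stopping: the model is checkpointed only when validation loss reaches a new best, and training halts after 20 consecutive epochs without improvement. A minimal sketch of such a loop, assuming a patience of 20 and hypothetical `train_epoch`/`validate`/`save_model` callables (not the project's actual API):

```python
def train_with_early_stopping(train_epoch, validate, save_model,
                              max_epochs=100, patience=20):
    """Train until `patience` consecutive epochs show no validation improvement."""
    best_val_loss = float("inf")
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        train_loss = train_epoch()        # one pass over the training set
        val_loss = validate()             # one pass over the validation set
        if val_loss < best_val_loss:
            best_val_loss = val_loss
            epochs_without_improvement = 0
            save_model()                  # checkpoint only on improvement
        else:
            epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            break                         # early stop, as in the log above
    return best_val_loss
```

Note that the model left on disk is the one from the last improving epoch (epoch 57 here), not the final epoch trained.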

Plot Validation and Loss Values from Training
  Epoch with best Validation Loss:   57, 0.4499
Generate AUROC/AUPRC for Each Intermediate Model

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0000.model
AUROC/AUPRC on Validation Data
Loss: 0.6535012423992157
AUROC: 0.8404608746766429
AUPRC: 0.6946058394528061
Sensitivity: 0.7800338409475466
Specificity: 0.7552742616033755
Threshold: 0.1
Accuracy:  0.7625434674615003

AUROC/AUPRC on Test Data
Loss: 0.6088970915434209
AUROC: 0.8273402436730197
AUPRC: 0.672361655613213
Sensitivity: 0.7420178799489144
Specificity: 0.7639524245196706
Threshold: 0.1
Accuracy:  0.7581677332435164
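Each evaluation block reports the threshold-free AUROC and AUPRC alongside sensitivity, specificity, and accuracy at a single probability threshold. A sketch of how such numbers can be computed with scikit-learn; the threshold rule shown here (maximizing Youden's J on the validation ROC) is an assumption for illustration, not necessarily the rule this notebook uses:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score, roc_curve

def evaluate_predictions(y_true, y_prob):
    """AUROC/AUPRC plus sensitivity, specificity, accuracy at one threshold."""
    y_true = np.asarray(y_true)
    y_prob = np.asarray(y_prob)
    auroc = roc_auc_score(y_true, y_prob)
    auprc = average_precision_score(y_true, y_prob)
    # Assumed rule: pick the ROC point maximizing Youden's J = TPR - FPR
    fpr, tpr, thresholds = roc_curve(y_true, y_prob)
    threshold = thresholds[np.argmax(tpr - fpr)]
    y_pred = (y_prob >= threshold).astype(int)
    sensitivity = (y_pred[y_true == 1] == 1).mean()   # true positive rate
    specificity = (y_pred[y_true == 0] == 0).mean()   # true negative rate
    accuracy = (y_pred == y_true).mean()
    return {"auroc": auroc, "auprc": auprc, "threshold": threshold,
            "sensitivity": sensitivity, "specificity": specificity,
            "accuracy": accuracy}
```

AUROC and AUPRC do not depend on the threshold, which is why they stay stable across checkpoints above while the sensitivity/specificity trade-off shifts with each reported threshold.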

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0001.model
AUROC/AUPRC on Validation Data
Loss: 0.6333339009433985
AUROC: 0.842398042841402
AUPRC: 0.7004776993668733
Sensitivity: 0.7631133671742809
Specificity: 0.7693389592123769
Threshold: 0.11
Accuracy:  0.767511177347243

AUROC/AUPRC on Test Data
Loss: 0.5861662711868895
AUROC: 0.8310979015422655
AUPRC: 0.6795841821167224
Sensitivity: 0.780970625798212
Specificity: 0.7289569990850869
Threshold: 0.1
Accuracy:  0.7426743011114854

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0002.model
AUROC/AUPRC on Validation Data
Loss: 0.6544789336621761
AUROC: 0.8415591585931494
AUPRC: 0.7037236468091911
Sensitivity: 0.7461928934010152
Specificity: 0.7862165963431786
Threshold: 0.1
Accuracy:  0.7744659711872827

AUROC/AUPRC on Test Data
Loss: 0.6061533093452454
AUROC: 0.8315784353934651
AUPRC: 0.6831008737409653
Sensitivity: 0.7745849297573435
Specificity: 0.7378774016468436
Threshold: 0.09
Accuracy:  0.7475581003704951

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0003.model
AUROC/AUPRC on Validation Data
Loss: 0.6899694837629795
AUROC: 0.8413402157538893
AUPRC: 0.7057041583322418
Sensitivity: 0.7597292724196277
Specificity: 0.7735583684950773
Threshold: 0.08
Accuracy:  0.76949826130154

AUROC/AUPRC on Test Data
Loss: 0.6351195418454231
AUROC: 0.8320478687666434
AUPRC: 0.684433492045463
Sensitivity: 0.7343550446998723
Specificity: 0.7797346752058555
Threshold: 0.08
Accuracy:  0.7677669248905356

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0004.model
AUROC/AUPRC on Validation Data
Loss: 0.6583978459239006
AUROC: 0.8417900004997607
AUPRC: 0.7065928975138287
Sensitivity: 0.7648054145516074
Specificity: 0.770042194092827
Threshold: 0.09
Accuracy:  0.7685047193243915

AUROC/AUPRC on Test Data
Loss: 0.6096689704250782
AUROC: 0.8325223411726078
AUPRC: 0.6848363531218693
Sensitivity: 0.7394636015325671
Specificity: 0.7728728270814272
Threshold: 0.09
Accuracy:  0.7640619737285281

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0005.model
AUROC/AUPRC on Validation Data
Loss: 0.6536137238144875
AUROC: 0.8410403592566413
AUPRC: 0.7056546680014295
Sensitivity: 0.7749576988155669
Specificity: 0.7623066104078763
Threshold: 0.09
Accuracy:  0.7660208643815202

AUROC/AUPRC on Test Data
Loss: 0.60408309292286
AUROC: 0.8320553177716317
AUPRC: 0.6846365836010202
Sensitivity: 0.7452107279693486
Specificity: 0.7676120768526989
Threshold: 0.09
Accuracy:  0.7617042775345234

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0006.model
AUROC/AUPRC on Validation Data
Loss: 0.5847087446600199
AUROC: 0.841198616852411
AUPRC: 0.7044386741735507
Sensitivity: 0.7445008460236887
Specificity: 0.7876230661040787
Threshold: 0.14
Accuracy:  0.7749627421758569

AUROC/AUPRC on Test Data
Loss: 0.5426248584656005
AUROC: 0.8323240661868923
AUPRC: 0.6841476237703032
Sensitivity: 0.7618135376756067
Specificity: 0.7520585544373285
Threshold: 0.13
Accuracy:  0.7546311889525092

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0007.model
AUROC/AUPRC on Validation Data
Loss: 0.5316014252603054
AUROC: 0.841283100230604
AUPRC: 0.7022262558418142
Sensitivity: 0.7783417935702199
Specificity: 0.759493670886076
Threshold: 0.18
Accuracy:  0.7650273224043715

AUROC/AUPRC on Test Data
Loss: 0.5007957911237757
AUROC: 0.8322847763370526
AUPRC: 0.6828384361622041
Sensitivity: 0.7477650063856961
Specificity: 0.7682982616651418
Threshold: 0.18
Accuracy:  0.7628831256315257

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0008.model
AUROC/AUPRC on Validation Data
Loss: 0.5632565077394247
AUROC: 0.840582245163624
AUPRC: 0.7029017653756217
Sensitivity: 0.7749576988155669
Specificity: 0.7559774964838256
Threshold: 0.14
Accuracy:  0.7615499254843517

AUROC/AUPRC on Test Data
Loss: 0.5263269727534436
AUROC: 0.8319136406179344
AUPRC: 0.6835689375919902
Sensitivity: 0.7509578544061303
Specificity: 0.7637236962488564
Threshold: 0.14
Accuracy:  0.7603570225665207

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0009.model
AUROC/AUPRC on Validation Data
Loss: 0.626797504723072
AUROC: 0.8397719186770142
AUPRC: 0.7042022759511328
Sensitivity: 0.766497461928934
Specificity: 0.7644163150492265
Threshold: 0.1
Accuracy:  0.7650273224043715

AUROC/AUPRC on Test Data
Loss: 0.5793282319890692
AUROC: 0.8312772618976675
AUPRC: 0.6836911555439789
Sensitivity: 0.7471264367816092
Specificity: 0.7694419030192132
Threshold: 0.1
Accuracy:  0.7635567531155271

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0010.model
AUROC/AUPRC on Validation Data
Loss: 0.5781942028552294
AUROC: 0.839998000956685
AUPRC: 0.7032080393772504
Sensitivity: 0.754653130287648
Specificity: 0.7721518987341772
Threshold: 0.13
Accuracy:  0.7670144063586687

AUROC/AUPRC on Test Data
Loss: 0.5380023587891396
AUROC: 0.8313797952604465
AUPRC: 0.6833873327303264
Sensitivity: 0.7694763729246488
Specificity: 0.7410795974382434
Threshold: 0.12
Accuracy:  0.7485685415964971

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0011.model
AUROC/AUPRC on Validation Data
Loss: 0.5159286521375179
AUROC: 0.840084864148348
AUPRC: 0.7007002168891437
Sensitivity: 0.7749576988155669
Specificity: 0.7587904360056259
Threshold: 0.2
Accuracy:  0.7635370094386488

AUROC/AUPRC on Test Data
Loss: 0.4863262411127699
AUROC: 0.8314867103908654
AUPRC: 0.6830240333766859
Sensitivity: 0.7484035759897829
Specificity: 0.7673833485818847
Threshold: 0.2
Accuracy:  0.7623779050185248

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0012.model
AUROC/AUPRC on Validation Data
Loss: 0.5414625033736229
AUROC: 0.8391829148431347
AUPRC: 0.7022722772261524
Sensitivity: 0.7817258883248731
Specificity: 0.7489451476793249
Threshold: 0.16
Accuracy:  0.7585692995529061

AUROC/AUPRC on Test Data
Loss: 0.5070193920363771
AUROC: 0.8305133737390733
AUPRC: 0.6826926530879678
Sensitivity: 0.7522349936143039
Specificity: 0.7573193046660567
Threshold: 0.16
Accuracy:  0.7559784439205119

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0013.model
AUROC/AUPRC on Validation Data
Loss: 0.5404961053282022
AUROC: 0.8392007634441611
AUPRC: 0.7030152482754418
Sensitivity: 0.7614213197969543
Specificity: 0.770745428973277
Threshold: 0.15
Accuracy:  0.7680079483358172

AUROC/AUPRC on Test Data
Loss: 0.5077381365476771
AUROC: 0.8305982339723704
AUPRC: 0.6829923088242594
Sensitivity: 0.7630906768837803
Specificity: 0.744053064958829
Threshold: 0.14
Accuracy:  0.7490737622094982

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0014.model
AUROC/AUPRC on Validation Data
Loss: 0.5910786129534245
AUROC: 0.8391020011851471
AUPRC: 0.7039763298004164
Sensitivity: 0.751269035532995
Specificity: 0.7819971870604782
Threshold: 0.12
Accuracy:  0.7729756582215599

AUROC/AUPRC on Test Data
Loss: 0.5501647017737652
AUROC: 0.8298781634901773
AUPRC: 0.6812791195728082
Sensitivity: 0.7662835249042146
Specificity: 0.7493138151875571
Threshold: 0.11
Accuracy:  0.7537891545975076

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0015.model
AUROC/AUPRC on Validation Data
Loss: 0.5177158359438181
AUROC: 0.8383535498487629
AUPRC: 0.703369698699829
Sensitivity: 0.751269035532995
Specificity: 0.7693389592123769
Threshold: 0.19
Accuracy:  0.764033780427223

AUROC/AUPRC on Test Data
Loss: 0.4876107028190126
AUROC: 0.8299353455578807
AUPRC: 0.6829311579017537
Sensitivity: 0.7592592592592593
Specificity: 0.7509149130832571
Threshold: 0.18
Accuracy:  0.7531155271135063

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0016.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.52it/s]
Loss: 0.5513972025364637
AUROC: 0.8383523599420276
AUPRC: 0.7035576330041635
Sensitivity: 0.754653130287648
Specificity: 0.7749648382559775
Threshold: 0.15
Accuracy:  0.7690014903129657

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.56it/s]
Loss: 0.5173078356905186
AUROC: 0.8290727945979232
AUPRC: 0.6802121152875231
Sensitivity: 0.7567049808429118
Specificity: 0.7536596523330283
Threshold: 0.14
Accuracy:  0.754462782081509

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0017.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.52it/s]
Loss: 0.5143855586647987
AUROC: 0.8381310372892972
AUPRC: 0.7043014297305192
Sensitivity: 0.7580372250423012
Specificity: 0.7609001406469761
Threshold: 0.21
Accuracy:  0.7600596125186289

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.59it/s]
Loss: 0.48657732567888623
AUROC: 0.8295378462034613
AUPRC: 0.6815140705938464
Sensitivity: 0.7624521072796935
Specificity: 0.747483989021043
Threshold: 0.2
Accuracy:  0.7514314584035029

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0018.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.54it/s]
Loss: 0.561011228710413
AUROC: 0.8372909631343095
AUPRC: 0.7036661166720477
Sensitivity: 0.754653130287648
Specificity: 0.7686357243319268
Threshold: 0.15
Accuracy:  0.7645305514157973

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.58it/s]
Loss: 0.5266565651969707
AUROC: 0.8278987729882137
AUPRC: 0.6778214148099075
Sensitivity: 0.7528735632183908
Specificity: 0.7538883806038427
Threshold: 0.14
Accuracy:  0.7536207477265072

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0019.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
Loss: 0.5669955424964428
AUROC: 0.8362188571659754
AUPRC: 0.7030558119224587
Sensitivity: 0.766497461928934
Specificity: 0.750351617440225
Threshold: 0.14
Accuracy:  0.7550919026328863

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.59it/s]
Loss: 0.5309801434582853
AUROC: 0.8269462497327121
AUPRC: 0.6769991199596572
Sensitivity: 0.7445721583652618
Specificity: 0.7628087831655993
Threshold: 0.14
Accuracy:  0.757999326372516

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0020.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.52it/s]
Loss: 0.5490618888288736
AUROC: 0.8400241789048575
AUPRC: 0.7050677161350646
Sensitivity: 0.7563451776649747
Specificity: 0.7672292545710268
Threshold: 0.16
Accuracy:  0.764033780427223

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.58it/s]
Loss: 0.5152567758205089
AUROC: 0.830045985190794
AUPRC: 0.6801213268679731
Sensitivity: 0.7541507024265645
Specificity: 0.7541171088746569
Threshold: 0.15
Accuracy:  0.7541259683395083

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0021.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.54it/s]
Loss: 0.5898239109665155
AUROC: 0.8362200470727104
AUPRC: 0.7035215592420925
Sensitivity: 0.7648054145516074
Specificity: 0.7510548523206751
Threshold: 0.12
Accuracy:  0.7550919026328863

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.59it/s]
Loss: 0.5535646917972159
AUROC: 0.8270253406386164
AUPRC: 0.6749243713222075
Sensitivity: 0.7432950191570882
Specificity: 0.7662397072278133
Threshold: 0.12
Accuracy:  0.7601886156955204

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0022.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.53it/s]
Loss: 0.5569506026804447
AUROC: 0.8356691202543545
AUPRC: 0.7025673340796905
Sensitivity: 0.754653130287648
Specificity: 0.7672292545710268
Threshold: 0.16
Accuracy:  0.7635370094386488

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.56it/s]
Loss: 0.5222853162187211
AUROC: 0.8268877531347165
AUPRC: 0.6762758106801334
Sensitivity: 0.7509578544061303
Specificity: 0.7538883806038427
Threshold: 0.15
Accuracy:  0.7531155271135063

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0023.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.51it/s]
Loss: 0.5285935085266829
AUROC: 0.8377776349889696
AUPRC: 0.7049241747367707
Sensitivity: 0.7563451776649747
Specificity: 0.7728551336146273
Threshold: 0.18
Accuracy:  0.7680079483358172

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.54it/s]
Loss: 0.4973775367153452
AUROC: 0.8292843609454804
AUPRC: 0.6789883504437347
Sensitivity: 0.7535121328224776
Specificity: 0.755032021957914
Threshold: 0.17
Accuracy:  0.7546311889525092

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0024.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.52it/s]
Loss: 0.5276791006326675
AUROC: 0.8370744001085194
AUPRC: 0.7043615626385147
Sensitivity: 0.7681895093062606
Specificity: 0.7510548523206751
Threshold: 0.17
Accuracy:  0.7560854446100348

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.56it/s]
Loss: 0.49815023864837404
AUROC: 0.8287298482506229
AUPRC: 0.679596654634226
Sensitivity: 0.7592592592592593
Specificity: 0.7438243366880146
Threshold: 0.16
Accuracy:  0.7478949141124958

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0025.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.53it/s]
Loss: 0.5921825524419546
AUROC: 0.8368578370827293
AUPRC: 0.7034324799858983
Sensitivity: 0.7563451776649747
Specificity: 0.7672292545710268
Threshold: 0.12
Accuracy:  0.764033780427223

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.57it/s]
Loss: 0.5549822459195523
AUROC: 0.8277887906204466
AUPRC: 0.6736281392885639
Sensitivity: 0.756066411238825
Specificity: 0.7497712717291857
Threshold: 0.11
Accuracy:  0.7514314584035029

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0026.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.51it/s]
Loss: 0.550725918263197
AUROC: 0.8380263254966076
AUPRC: 0.7041982396601066
Sensitivity: 0.7580372250423012
Specificity: 0.7714486638537271
Threshold: 0.15
Accuracy:  0.767511177347243

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.53it/s]
Loss: 0.51760892157859
AUROC: 0.8292598960761564
AUPRC: 0.6763946353942181
Sensitivity: 0.7522349936143039
Specificity: 0.7566331198536139
Threshold: 0.14
Accuracy:  0.755473223307511

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0027.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.52it/s]
Loss: 0.543362507596612
AUROC: 0.8383594993824385
AUPRC: 0.7050299625395844
Sensitivity: 0.766497461928934
Specificity: 0.7531645569620253
Threshold: 0.16
Accuracy:  0.7570789865871833

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.51it/s]
Loss: 0.507772649222232
AUROC: 0.8299786520280572
AUPRC: 0.6790703846780587
Sensitivity: 0.7630906768837803
Specificity: 0.7435956084172004
Threshold: 0.15
Accuracy:  0.7487369484674975

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0028.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
Loss: 0.5283217411488295
AUROC: 0.8371267560048643
AUPRC: 0.7054024143434982
Sensitivity: 0.754653130287648
Specificity: 0.7735583684950773
Threshold: 0.18
Accuracy:  0.7680079483358172

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.45it/s]
Loss: 0.49711484224238295
AUROC: 0.8291545875938722
AUPRC: 0.6794461986417142
Sensitivity: 0.7490421455938697
Specificity: 0.7586916742909423
Threshold: 0.17
Accuracy:  0.7561468507915123

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0029.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.42it/s]
Loss: 0.5109219998121262
AUROC: 0.8412747708834581
AUPRC: 0.7076690260869987
Sensitivity: 0.7563451776649747
Specificity: 0.7728551336146273
Threshold: 0.19
Accuracy:  0.7680079483358172

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.52it/s]
Loss: 0.4834793378063973
AUROC: 0.8325488508668304
AUPRC: 0.683659745469783
Sensitivity: 0.7522349936143039
Specificity: 0.7586916742909423
Threshold: 0.18
Accuracy:  0.756988885146514

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0030.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
Loss: 0.5170892179012299
AUROC: 0.8367852527718878
AUPRC: 0.7047715122667335
Sensitivity: 0.7580372250423012
Specificity: 0.7658227848101266
Threshold: 0.18
Accuracy:  0.7635370094386488

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.50it/s]
Loss: 0.49081542992845495
AUROC: 0.8289876422467835
AUPRC: 0.6789789129219667
Sensitivity: 0.7535121328224776
Specificity: 0.7527447392497713
Threshold: 0.17
Accuracy:  0.7529471202425059

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0031.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.44it/s]
Loss: 0.4666248168796301
AUROC: 0.840460874676643
AUPRC: 0.7066221123609067
Sensitivity: 0.7631133671742809
Specificity: 0.7693389592123769
Threshold: 0.26
Accuracy:  0.767511177347243

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.51it/s]
Loss: 0.4477738763423676
AUROC: 0.8321233081995142
AUPRC: 0.6832629503263878
Sensitivity: 0.7675606641123882
Specificity: 0.744053064958829
Threshold: 0.25
Accuracy:  0.7502526103065005

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0032.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
Loss: 0.49430943466722965
AUROC: 0.8414853843755727
AUPRC: 0.7080042381293726
Sensitivity: 0.7631133671742809
Specificity: 0.7651195499296765
Threshold: 0.19
Accuracy:  0.7645305514157973

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.48it/s]
Loss: 0.46814992643417197
AUROC: 0.8335642524879676
AUPRC: 0.6828418358910633
Sensitivity: 0.7618135376756067
Specificity: 0.7532021957913998
Threshold: 0.18
Accuracy:  0.755473223307511

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0033.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.45it/s]
Loss: 0.5363113358616829
AUROC: 0.8380013374551702
AUPRC: 0.7062123737704438
Sensitivity: 0.766497461928934
Specificity: 0.7566807313642757
Threshold: 0.15
Accuracy:  0.7595628415300546

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.52it/s]
Loss: 0.5047734240268139
AUROC: 0.8300663604103204
AUPRC: 0.6788326584415603
Sensitivity: 0.7598978288633461
Specificity: 0.7490850869167429
Threshold: 0.14
Accuracy:  0.7519366790165039

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0034.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
Loss: 0.4997611902654171
AUROC: 0.8362735928757903
AUPRC: 0.7041790179375239
Sensitivity: 0.754653130287648
Specificity: 0.7735583684950773
Threshold: 0.19
Accuracy:  0.7680079483358172

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.51it/s]
Loss: 0.47399168572527295
AUROC: 0.8290002033140185
AUPRC: 0.6778275297054758
Sensitivity: 0.7490421455938697
Specificity: 0.7564043915827996
Threshold: 0.18
Accuracy:  0.754462782081509

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0035.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.45it/s]
Loss: 0.5053051505237818
AUROC: 0.8417031373080979
AUPRC: 0.7070685245310808
Sensitivity: 0.7580372250423012
Specificity: 0.7721518987341772
Threshold: 0.19
Accuracy:  0.7680079483358172

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.51it/s]
Loss: 0.47961032485708277
AUROC: 0.8331620062186048
AUPRC: 0.6834048560573776
Sensitivity: 0.7573435504469987
Specificity: 0.7561756633119854
Threshold: 0.18
Accuracy:  0.756483664533513

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0036.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
Loss: 0.49332781322300434
AUROC: 0.8383345113410011
AUPRC: 0.7052806958807664
Sensitivity: 0.7563451776649747
Specificity: 0.7679324894514767
Threshold: 0.2
Accuracy:  0.7645305514157973

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.50it/s]
Loss: 0.46899270028509993
AUROC: 0.8307048569849466
AUPRC: 0.6802744685496546
Sensitivity: 0.7528735632183908
Specificity: 0.7561756633119854
Threshold: 0.19
Accuracy:  0.7553048164365106

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0037.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.46it/s]
Loss: 0.508649405092001
AUROC: 0.84091660895619
AUPRC: 0.7062360374419487
Sensitivity: 0.7563451776649747
Specificity: 0.7728551336146273
Threshold: 0.19
Accuracy:  0.7680079483358172

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.50it/s]
Loss: 0.48267167774920766
AUROC: 0.8326713943018325
AUPRC: 0.6814412271513172
Sensitivity: 0.7541507024265645
Specificity: 0.7580054894784996
Threshold: 0.18
Accuracy:  0.756988885146514

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0038.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
Loss: 0.5094856638461351
AUROC: 0.8388818684391517
AUPRC: 0.7044865085514967
Sensitivity: 0.751269035532995
Specificity: 0.7693389592123769
Threshold: 0.18
Accuracy:  0.764033780427223

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.50it/s]
Loss: 0.4821391983869228
AUROC: 0.8309760884018701
AUPRC: 0.6793257115656551
Sensitivity: 0.7509578544061303
Specificity: 0.7628087831655993
Threshold: 0.17
Accuracy:  0.7596833950825194

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0039.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
Loss: 0.5115211009979248
AUROC: 0.8447552480836552
AUPRC: 0.7059207105045948
Sensitivity: 0.766497461928934
Specificity: 0.7559774964838256
Threshold: 0.18
Accuracy:  0.7590660705414803

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.52it/s]
Loss: 0.48672875698576584
AUROC: 0.8344092033479042
AUPRC: 0.6828309696527091
Sensitivity: 0.7630906768837803
Specificity: 0.7456541628545288
Threshold: 0.17
Accuracy:  0.7502526103065005

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0040.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.46it/s]
Loss: 0.5558444410562515
AUROC: 0.8409737244794752
AUPRC: 0.7062670051392486
Sensitivity: 0.7681895093062606
Specificity: 0.7552742616033755
Threshold: 0.13
Accuracy:  0.7590660705414803

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.50it/s]
Loss: 0.5224271930278616
AUROC: 0.8322932477544902
AUPRC: 0.6769350919030956
Sensitivity: 0.7528735632183908
Specificity: 0.7634949679780421
Threshold: 0.13
Accuracy:  0.7606938363085214

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0041.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.45it/s]
Loss: 0.47508005797863007
AUROC: 0.8452205016170833
AUPRC: 0.7067869682981625
Sensitivity: 0.7614213197969543
Specificity: 0.7665260196905767
Threshold: 0.23
Accuracy:  0.7650273224043715

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.52it/s]
Loss: 0.454404871831549
AUROC: 0.8356379970531153
AUPRC: 0.6853610805689515
Sensitivity: 0.7694763729246488
Specificity: 0.7422232387923148
Threshold: 0.22
Accuracy:  0.7494105759514988

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0042.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.40it/s]
Loss: 0.5271133966743946
AUROC: 0.8401752970602164
AUPRC: 0.7069049631122603
Sensitivity: 0.7715736040609137
Specificity: 0.7538677918424754
Threshold: 0.14
Accuracy:  0.7590660705414803

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.50it/s]
Loss: 0.4974592997038618
AUROC: 0.8321073147476279
AUPRC: 0.6794951327461125
Sensitivity: 0.7515964240102171
Specificity: 0.765096065873742
Threshold: 0.14
Accuracy:  0.761535870663523

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0043.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
Loss: 0.4523321595042944
AUROC: 0.8414508770802545
AUPRC: 0.7067338897637775
Sensitivity: 0.7698815566835872
Specificity: 0.7552742616033755
Threshold: 0.23
Accuracy:  0.7595628415300546

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.52it/s]
Loss: 0.43426344781479936
AUROC: 0.8323956350583477
AUPRC: 0.6818582342308495
Sensitivity: 0.7528735632183908
Specificity: 0.7609789569990851
Threshold: 0.23
Accuracy:  0.7588413607275177

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0044.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.45it/s]
Loss: 0.5504965372383595
AUROC: 0.8371826816214145
AUPRC: 0.7044929354192526
Sensitivity: 0.7563451776649747
Specificity: 0.770745428973277
Threshold: 0.13
Accuracy:  0.7665176353700944

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.50it/s]
Loss: 0.5184350235665098
AUROC: 0.8288800698512186
AUPRC: 0.675040826195875
Sensitivity: 0.756066411238825
Specificity: 0.7573193046660567
Threshold: 0.12
Accuracy:  0.756988885146514

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0045.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
Loss: 0.5162941012531519
AUROC: 0.8443209321253401
AUPRC: 0.7098141779405276
Sensitivity: 0.7597292724196277
Specificity: 0.7679324894514767
Threshold: 0.16
Accuracy:  0.7655240933929458

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.52it/s]
Loss: 0.48407063046668436
AUROC: 0.834999281390107
AUPRC: 0.6813118217869141
Sensitivity: 0.7605363984674329
Specificity: 0.7568618481244281
Threshold: 0.15
Accuracy:  0.7578309195015157

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0046.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
Loss: 0.4825645685195923
AUROC: 0.8410284601892903
AUPRC: 0.7071670263513283
Sensitivity: 0.7732656514382402
Specificity: 0.7573839662447257
Threshold: 0.19
Accuracy:  0.762046696472926

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.48it/s]
Loss: 0.4586849161919127
AUROC: 0.832664894679833
AUPRC: 0.6810402223212074
Sensitivity: 0.764367816091954
Specificity: 0.7451967063129002
Threshold: 0.18
Accuracy:  0.7502526103065005

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0047.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.46it/s]
Loss: 0.460966382175684
AUROC: 0.8473766126211029
AUPRC: 0.7117804902553801
Sensitivity: 0.7563451776649747
Specificity: 0.7756680731364276
Threshold: 0.25
Accuracy:  0.7699950322901142

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.52it/s]
Loss: 0.4416118153232209
AUROC: 0.8373198655323146
AUPRC: 0.6864324542170773
Sensitivity: 0.764367816091954
Specificity: 0.7467978042086002
Threshold: 0.24
Accuracy:  0.7514314584035029

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0048.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
Loss: 0.4731300622224808
AUROC: 0.8406357909667042
AUPRC: 0.7086216710178065
Sensitivity: 0.766497461928934
Specificity: 0.7609001406469761
Threshold: 0.22
Accuracy:  0.7625434674615003

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.50it/s]
Loss: 0.44919828151134733
AUROC: 0.833101099648407
AUPRC: 0.6824691900492221
Sensitivity: 0.7490421455938697
Specificity: 0.7657822506861848
Threshold: 0.22
Accuracy:  0.7613674637925227

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0049.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.46it/s]
Loss: 0.4830605871975422
AUROC: 0.8402978574539327
AUPRC: 0.708782351651913
Sensitivity: 0.7614213197969543
Specificity: 0.7651195499296765
Threshold: 0.2
Accuracy:  0.764033780427223

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.52it/s]
Loss: 0.45590098963138903
AUROC: 0.8324454411505238
AUPRC: 0.6813108768113137
Sensitivity: 0.7554278416347382
Specificity: 0.7541171088746569
Threshold: 0.19
Accuracy:  0.754462782081509

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0050.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
Loss: 0.4707892872393131
AUROC: 0.8438378299908854
AUPRC: 0.7122158801805264
Sensitivity: 0.766497461928934
Specificity: 0.7623066104078763
Threshold: 0.21
Accuracy:  0.7635370094386488

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.50it/s]
Loss: 0.4458308413307718
AUROC: 0.8355523334957509
AUPRC: 0.6850499827374373
Sensitivity: 0.7611749680715197
Specificity: 0.7497712717291857
Threshold: 0.2
Accuracy:  0.7527787133715056

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0051.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.41it/s]
Loss: 0.4787202253937721
AUROC: 0.8393970980554544
AUPRC: 0.7061351114722062
Sensitivity: 0.7563451776649747
Specificity: 0.770745428973277
Threshold: 0.18
Accuracy:  0.7665176353700944

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.50it/s]
Loss: 0.45405019566099697
AUROC: 0.8321906413622506
AUPRC: 0.67917985329206
Sensitivity: 0.7637292464878672
Specificity: 0.7509149130832571
Threshold: 0.16
Accuracy:  0.7542943752105086

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0052.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.45it/s]
Loss: 0.45823846384882927
AUROC: 0.8476026949007738
AUPRC: 0.7124066602566757
Sensitivity: 0.7749576988155669
Specificity: 0.750351617440225
Threshold: 0.23
Accuracy:  0.7575757575757576

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.50it/s]
Loss: 0.43945177977389477
AUROC: 0.8365702911480115
AUPRC: 0.6871233015612112
Sensitivity: 0.7509578544061303
Specificity: 0.7614364135407137
Threshold: 0.23
Accuracy:  0.7586729538565173

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0053.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
Loss: 0.5477554351091385
AUROC: 0.8495327236251222
AUPRC: 0.7165428691759295
Sensitivity: 0.7529610829103215
Specificity: 0.7777777777777778
Threshold: 0.13
Accuracy:  0.7704918032786885

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.51it/s]
Loss: 0.5124926592441316
AUROC: 0.838547271677773
AUPRC: 0.6832273063027918
Sensitivity: 0.7579821200510856
Specificity: 0.7621225983531564
Threshold: 0.12
Accuracy:  0.7610306500505221

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0054.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
Loss: 0.48119743540883064
AUROC: 0.8456702863629549
AUPRC: 0.7134595164317598
Sensitivity: 0.7715736040609137
Specificity: 0.7566807313642757
Threshold: 0.2
Accuracy:  0.7610531544957775

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.50it/s]
Loss: 0.4549684832070736
AUROC: 0.8357791630005877
AUPRC: 0.6847687497253426
Sensitivity: 0.7515964240102171
Specificity: 0.7607502287282708
Threshold: 0.2
Accuracy:  0.7583361401145167

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0055.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.44it/s]
Loss: 0.4818414133042097
AUROC: 0.8426372140951592
AUPRC: 0.7093267604005528
Sensitivity: 0.7614213197969543
Specificity: 0.7630098452883263
Threshold: 0.17
Accuracy:  0.7625434674615003

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.49it/s]
Loss: 0.45575070127527767
AUROC: 0.8337885989911418
AUPRC: 0.6826340858006883
Sensitivity: 0.7592592592592593
Specificity: 0.7573193046660567
Threshold: 0.16
Accuracy:  0.7578309195015157

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0056.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.45it/s]
Loss: 0.485957071185112
AUROC: 0.8461331600829127
AUPRC: 0.7130503502748095
Sensitivity: 0.7597292724196277
Specificity: 0.7644163150492265
Threshold: 0.19
Accuracy:  0.7630402384500745

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.50it/s]
Loss: 0.45958390166150764
AUROC: 0.8366471911700957
AUPRC: 0.6849248894321829
Sensitivity: 0.7586206896551724
Specificity: 0.7522872827081427
Threshold: 0.18
Accuracy:  0.7539575614685079

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0057.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:07<00:00,  2.22it/s]
Loss: 0.4514963813126087
AUROC: 0.8452597685393419
AUPRC: 0.7114039654306061
Sensitivity: 0.7631133671742809
Specificity: 0.7658227848101266
Threshold: 0.24
Accuracy:  0.7650273224043715

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.50it/s]
Loss: 0.43181566513599234
AUROC: 0.835159508026814
AUPRC: 0.6824081291521847
Sensitivity: 0.7650063856960408
Specificity: 0.7447392497712717
Threshold: 0.23
Accuracy:  0.7500842034355002

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0058.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.44it/s]
Loss: 0.48708572052419186
AUROC: 0.8502329837387346
AUPRC: 0.7167476389764316
Sensitivity: 0.7681895093062606
Specificity: 0.7630098452883263
Threshold: 0.19
Accuracy:  0.7645305514157973

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.50it/s]
Loss: 0.4635770007016811
AUROC: 0.839273111487359
AUPRC: 0.6849898444892246
Sensitivity: 0.764367816091954
Specificity: 0.7486276303751144
Threshold: 0.18
Accuracy:  0.7527787133715056

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0059.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.45it/s]
Loss: 0.5357396360486746
AUROC: 0.8411593499301524
AUPRC: 0.7083894648249118
Sensitivity: 0.7648054145516074
Specificity: 0.760196905766526
Threshold: 0.11
Accuracy:  0.7615499254843517

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.52it/s]
Loss: 0.5038037135245952
AUROC: 0.8323510870873396
AUPRC: 0.6776792865863801
Sensitivity: 0.7701149425287356
Specificity: 0.7463403476669717
Threshold: 0.1
Accuracy:  0.7526103065005052

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0060.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.48it/s]
Loss: 0.4942880980670452
AUROC: 0.842666961763537
AUPRC: 0.7104536241623653
Sensitivity: 0.7580372250423012
Specificity: 0.7587904360056259
Threshold: 0.16
Accuracy:  0.7585692995529061

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.48it/s]
Loss: 0.4681247292046851
AUROC: 0.8342016536206838
AUPRC: 0.6814869682640969
Sensitivity: 0.756066411238825
Specificity: 0.7543458371454712
Threshold: 0.15
Accuracy:  0.7547995958235096

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0061.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
Loss: 0.5198579914867878
AUROC: 0.8437991580219942
AUPRC: 0.7112947173556846
Sensitivity: 0.7563451776649747
Specificity: 0.7679324894514767
Threshold: 0.15
Accuracy:  0.7645305514157973

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.52it/s]
Loss: 0.4886336044428196
AUROC: 0.8354075160752449
AUPRC: 0.6817570503517405
Sensitivity: 0.7515964240102171
Specificity: 0.7607502287282708
Threshold: 0.14
Accuracy:  0.7583361401145167

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0062.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.46it/s]
Loss: 0.47950394079089165
AUROC: 0.8429204118981154
AUPRC: 0.7114403424662427
Sensitivity: 0.7631133671742809
Specificity: 0.7623066104078763
Threshold: 0.19
Accuracy:  0.7625434674615003

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.50it/s]
Loss: 0.4554876146164346
AUROC: 0.8335492814485306
AUPRC: 0.6800904440812454
Sensitivity: 0.7509578544061303
Specificity: 0.7605215004574566
Threshold: 0.18
Accuracy:  0.757999326372516

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0063.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.46it/s]
Loss: 0.4991831202059984
AUROC: 0.8466709979271826
AUPRC: 0.7144976747263125
Sensitivity: 0.7648054145516074
Specificity: 0.7573839662447257
Threshold: 0.15
Accuracy:  0.7595628415300546

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.51it/s]
Loss: 0.4707515883318921
AUROC: 0.8372377073890624
AUPRC: 0.6859888379746734
Sensitivity: 0.7522349936143039
Specificity: 0.7673833485818847
Threshold: 0.15
Accuracy:  0.7633883462445268

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0064.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
Loss: 0.4760848470032215
AUROC: 0.8466115025904271
AUPRC: 0.7123826699568037
Sensitivity: 0.766497461928934
Specificity: 0.7616033755274262
Threshold: 0.2
Accuracy:  0.7630402384500745

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.48it/s]
Loss: 0.4534674499263155
AUROC: 0.83651770993633
AUPRC: 0.6829970304208006
Sensitivity: 0.7586206896551724
Specificity: 0.7490850869167429
Threshold: 0.19
Accuracy:  0.7515998652745032

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0065.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.45it/s]
Loss: 0.4770146273076534
AUROC: 0.8455370168086225
AUPRC: 0.7129038810215576
Sensitivity: 0.7580372250423012
Specificity: 0.770745428973277
Threshold: 0.21
Accuracy:  0.7670144063586687

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.51it/s]
Loss: 0.452826477745746
AUROC: 0.8356651640124839
AUPRC: 0.6835442080102399
Sensitivity: 0.7394636015325671
Specificity: 0.7708142726440989
Threshold: 0.21
Accuracy:  0.762546311889525

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0066.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.45it/s]
Loss: 0.5874015092849731
AUROC: 0.8421672009347908
AUPRC: 0.7115620073249723
Sensitivity: 0.7529610829103215
Specificity: 0.7834036568213784
Threshold: 0.08
Accuracy:  0.7744659711872827

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.50it/s]
Loss: 0.5481905328466538
AUROC: 0.8337799815147829
AUPRC: 0.6811494065433241
Sensitivity: 0.7650063856960408
Specificity: 0.7534309240622141
Threshold: 0.07
Accuracy:  0.756483664533513

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0067.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
Loss: 0.47250075452029705
AUROC: 0.8518506619451167
AUPRC: 0.7170033795579679
Sensitivity: 0.7614213197969543
Specificity: 0.7686357243319268
Threshold: 0.22
Accuracy:  0.7665176353700944

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.51it/s]
Loss: 0.4549822984857762
AUROC: 0.8375906587724742
AUPRC: 0.6857187571078114
Sensitivity: 0.7611749680715197
Specificity: 0.7557182067703568
Threshold: 0.21
Accuracy:  0.7571572920175144

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0068.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.45it/s]
Loss: 0.4809399265795946
AUROC: 0.847790700164921
AUPRC: 0.7134278560930908
Sensitivity: 0.7495769881556683
Specificity: 0.7827004219409283
Threshold: 0.23
Accuracy:  0.7729756582215599

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.50it/s]
Loss: 0.4590376983297632
AUROC: 0.8378075562706598
AUPRC: 0.6832843584877448
Sensitivity: 0.7713920817369093
Specificity: 0.7392497712717292
Threshold: 0.22
Accuracy:  0.7477265072414955

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0069.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.46it/s]
Loss: 0.5317956395447254
AUROC: 0.8420208424063722
AUPRC: 0.7105294982618073
Sensitivity: 0.766497461928934
Specificity: 0.7573839662447257
Threshold: 0.1
Accuracy:  0.7600596125186289

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.49it/s]
Loss: 0.4997215800462885
AUROC: 0.8338622126874958
AUPRC: 0.6803804984130863
Sensitivity: 0.7490421455938697
Specificity: 0.7646386093321135
Threshold: 0.1
Accuracy:  0.7605254294375211

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0070.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.46it/s]
Loss: 0.538248946890235
AUROC: 0.8516293392923863
AUPRC: 0.717704112004119
Sensitivity: 0.7715736040609137
Specificity: 0.7616033755274262
Threshold: 0.11
Accuracy:  0.7645305514157973

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.50it/s]
Loss: 0.5091968730409094
AUROC: 0.8390576745783864
AUPRC: 0.6842803622879436
Sensitivity: 0.7630906768837803
Specificity: 0.7548032936870998
Threshold: 0.1
Accuracy:  0.756988885146514

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0071.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
Loss: 0.5690115727484226
AUROC: 0.8381048593411248
AUPRC: 0.7067249739149513
Sensitivity: 0.7817258883248731
Specificity: 0.7559774964838256
Threshold: 0.09
Accuracy:  0.7635370094386488

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.52it/s]
Loss: 0.5246939747891528
AUROC: 0.8312782112806565
AUPRC: 0.6773842616462861
Sensitivity: 0.7547892720306514
Specificity: 0.7552607502287283
Threshold: 0.09
Accuracy:  0.7551364095655103

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0072.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
Loss: 0.5323373451828957
AUROC: 0.842816890012161
AUPRC: 0.7121506618011565
Sensitivity: 0.7732656514382402
Specificity: 0.7552742616033755
Threshold: 0.1
Accuracy:  0.7605563835072032

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.50it/s]
Loss: 0.4977524955856039
AUROC: 0.8346559698955035
AUPRC: 0.684048182871155
Sensitivity: 0.756066411238825
Specificity: 0.7602927721866423
Threshold: 0.1
Accuracy:  0.7591781744695184

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0073.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
Loss: 0.46761574037373066
AUROC: 0.8424396895771309
AUPRC: 0.7109082306450057
Sensitivity: 0.7614213197969543
Specificity: 0.7644163150492265
Threshold: 0.19
Accuracy:  0.7635370094386488

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.52it/s]
Loss: 0.444743535303055
AUROC: 0.8338576848609345
AUPRC: 0.6820225753337353
Sensitivity: 0.7528735632183908
Specificity: 0.7543458371454712
Threshold: 0.18
Accuracy:  0.7539575614685079

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0074.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.42it/s]
Loss: 0.4640922900289297
AUROC: 0.8455013196065692
AUPRC: 0.7111601998275949
Sensitivity: 0.7597292724196277
Specificity: 0.770042194092827
Threshold: 0.19
Accuracy:  0.7670144063586687

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.50it/s]
Loss: 0.44467993904935554
AUROC: 0.8359263173638352
AUPRC: 0.6850455592686411
Sensitivity: 0.7637292464878672
Specificity: 0.7511436413540714
Threshold: 0.17
Accuracy:  0.754462782081509

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0075.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.46it/s]
Loss: 0.47241545654833317
AUROC: 0.8458142650779032
AUPRC: 0.7126723452598469
Sensitivity: 0.7648054145516074
Specificity: 0.7658227848101266
Threshold: 0.23
Accuracy:  0.7655240933929458

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.52it/s]
Loss: 0.44879174517824294
AUROC: 0.8356212002771615
AUPRC: 0.6825644293781369
Sensitivity: 0.7598978288633461
Specificity: 0.7472552607502287
Threshold: 0.22
Accuracy:  0.7505894240485012

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0076.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.47it/s]
Loss: 0.4766403529793024
AUROC: 0.8488080704234403
AUPRC: 0.7146481498829892
Sensitivity: 0.7597292724196277
Specificity: 0.7651195499296765
Threshold: 0.21
Accuracy:  0.7635370094386488

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.50it/s]
Loss: 0.4517756123492058
AUROC: 0.8389490067409113
AUPRC: 0.6831228637292677
Sensitivity: 0.7413793103448276
Specificity: 0.7703568161024703
Threshold: 0.21
Accuracy:  0.7627147187605254

Intermediate Model:
  ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0077.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.46it/s]
Loss: 0.5707092210650444
AUROC: 0.8451015109435722
AUPRC: 0.7135106746613402
Sensitivity: 0.7681895093062606
Specificity: 0.759493670886076
Threshold: 0.09
Accuracy:  0.762046696472926

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.51it/s]
Loss: 0.5343536503137426
AUROC: 0.8343366120639996
AUPRC: 0.6802576781426042
Sensitivity: 0.7535121328224776
Specificity: 0.7660109789569991
Threshold: 0.09
Accuracy:  0.7627147187605254


Plot AUROC/AUPRC for Each Intermediate Model
  Epoch with best Validation Loss:      57, 0.4499
  Epoch with best model Test AUROC:     58, 0.8393
  Epoch with best model Test Accuracy:   3, 0.7678
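The checkpoint selection summarized above can be sketched as a simple reduction over per-epoch metrics. This is a hypothetical illustration (the tuple layout and variable names are assumptions, not the project's actual code); the two example rows reuse the epoch-57 validation loss and epoch-58 test AUROC reported in the log.

```python
# Sketch (assumed names, not the project's code) of picking the "best" epoch
# from per-epoch metrics like those logged above.
epochs = [
    # (epoch, val_loss, test_auroc) -- values taken from the summary above
    (57, 0.4499, 0.8352),
    (58, 0.4871, 0.8393),
]

# Best checkpoint by minimum validation loss.
best_by_val_loss = min(epochs, key=lambda e: e[1])

# Best checkpoint by maximum test AUROC.
best_by_test_auroc = max(epochs, key=lambda e: e[2])

print(best_by_val_loss[0])    # epoch chosen by validation loss
print(best_by_test_auroc[0])  # epoch chosen by test AUROC
```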

AUROC/AUPRC Plots - Best Model Based on Validation Loss
  Epoch with best Validation Loss:   57, 0.4499
  Best Model Based on Validation Loss:
    ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0057.model

Generate Stats Based on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.46it/s]
Loss: 0.43181566513599234
AUROC: 0.835159508026814
AUPRC: 0.6824081291521847
Sensitivity: 0.7650063856960408
Specificity: 0.7447392497712717
Threshold: 0.23
Accuracy:  0.7500842034355002
best_model_val_test_auroc: 0.835159508026814
best_model_val_test_auprc: 0.6824081291521847

AUROC/AUPRC Plots - Best Model Based on Model AUROC
  Epoch with best model Test AUROC:  58, 0.8393
  Best Model Based on Model AUROC:
    ./vitaldb_cache/models/ABP_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_37bac32c_0058.model

Generate Stats Based on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.40it/s]
Loss: 0.4635770007016811
AUROC: 0.839273111487359
AUPRC: 0.6849898444892246
Sensitivity: 0.764367816091954
Specificity: 0.7486276303751144
Threshold: 0.18
Accuracy:  0.7527787133715056
best_model_auroc_test_auroc: 0.839273111487359
best_model_auroc_test_auprc: 0.6849898444892246
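The per-model statistics above (AUROC, AUPRC, and a sensitivity/specificity pair at a chosen threshold) can be reproduced with standard scikit-learn calls. A minimal sketch follows, on toy labels and scores; the log does not show the project's exact threshold-selection rule, so maximizing Youden's J (sensitivity + specificity - 1) over the ROC curve is assumed here purely for illustration.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score, roc_curve

# Toy, perfectly separable labels and prediction scores (illustrative only).
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_score = np.array([0.1, 0.3, 0.7, 0.8, 0.4, 0.6, 0.2, 0.9])

auroc = roc_auc_score(y_true, y_score)            # area under ROC curve
auprc = average_precision_score(y_true, y_score)  # area under PR curve

# Assumed threshold rule: pick the ROC point maximizing Youden's J.
fpr, tpr, thresholds = roc_curve(y_true, y_score)
best = np.argmax(tpr - fpr)
sensitivity = tpr[best]        # true positive rate at the chosen threshold
specificity = 1.0 - fpr[best]  # true negative rate at the chosen threshold
```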

Total Processing Time: 6876.3920 sec
Experiment Setup
  name:              ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7
  prediction_window: 003
  max_cases:         _ALL
  use_abp:           True
  use_eeg:           True
  use_ecg:           False
  n_residuals:       12
  skip_connection:   False
  batch_size:        128
  learning_rate:     0.0001
  weight_decay:      0.1
  max_epochs:        200
  patience:          20
  device:            mps
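The Experiment Setup printout above maps naturally onto a small configuration object. The dataclass below is a hypothetical sketch (the class name and field set are taken from the printed keys, but the project's actual config object may differ), shown with the values from this ABP+EEG run; the `name` string is truncated for illustration.

```python
from dataclasses import dataclass

# Hypothetical config object mirroring the "Experiment Setup" printout above.
@dataclass
class ExperimentConfig:
    name: str
    prediction_window: str
    use_abp: bool
    use_eeg: bool
    use_ecg: bool
    n_residuals: int
    skip_connection: bool
    batch_size: int
    learning_rate: float
    weight_decay: float
    max_epochs: int
    patience: int
    device: str

cfg = ExperimentConfig(
    name="ABP_EEG_12_RESIDUAL_BLOCKS_...",  # truncated for illustration
    prediction_window="003",
    use_abp=True, use_eeg=True, use_ecg=False,
    n_residuals=12, skip_connection=False,
    batch_size=128, learning_rate=1e-4, weight_decay=0.1,
    max_epochs=200, patience=20,
    device="mps",
)
```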

Model Architecture
HypotensionCNN(
  (abpResiduals): Sequential(
    (0): ResidualBlock(
      (bn1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (1): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (2): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (3): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (4): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (5): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (6): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (7): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (8): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=1, dilation=1, ceil_mode=False)
    )
    (9): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (10): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (11): ResidualBlock(
      (bn1): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
  )
  (abpFc): Linear(in_features=2814, out_features=32, bias=True)
  (eegResiduals): Sequential(
    (0): ResidualBlock(
      (bn1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(1, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(1, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (1): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (2): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (3): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (4): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (5): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(2, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (6): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (7): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
    )
    (8): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (9): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
    )
    (10): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(4, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (11): ResidualBlock(
      (bn1): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(6, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(6, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
    )
  )
  (eegFc): Linear(in_features=720, out_features=32, bias=True)
  (fullLinear1): Linear(in_features=64, out_features=16, bias=True)
  (fullLinear2): Linear(in_features=16, out_features=1, bias=True)
  (sigmoid): Sigmoid()
)
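The repeated `ResidualBlock` entries above all share one pre-activation layout: `bn1 → relu → dropout → conv1 → bn2 → relu → conv2`, with a `residualConv` projection on the shortcut and an optional `MaxPool1d` downsample. The following is a minimal sketch of such a block inferred from the printed repr — channel counts, kernel sizes, and padding follow the printout, but the exact forward order and dropout placement are assumptions, not the project's verified code:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Pre-activation 1-D residual block matching the printed module layout.
    Sketch only: forward order and dropout placement are inferred from the repr."""

    def __init__(self, in_ch, out_ch, kernel_size, downsample=False, dropout=0.5):
        super().__init__()
        pad = kernel_size // 2  # "same" padding, as in the printed Conv1d layers
        self.bn1 = nn.BatchNorm1d(in_ch)
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(p=dropout)
        self.conv1 = nn.Conv1d(in_ch, out_ch, kernel_size, padding=pad, bias=False)
        self.bn2 = nn.BatchNorm1d(out_ch)
        self.conv2 = nn.Conv1d(out_ch, out_ch, kernel_size, padding=pad, bias=False)
        # Projection shortcut, present in every printed block
        self.residualConv = nn.Conv1d(in_ch, out_ch, kernel_size, padding=pad, bias=False)
        self.downsample = nn.MaxPool1d(kernel_size=2, stride=2) if downsample else None

    def forward(self, x):
        out = self.conv1(self.dropout(self.relu(self.bn1(x))))
        out = self.conv2(self.relu(self.bn2(out)))
        out = out + self.residualConv(x)  # residual sum
        if self.downsample is not None:
            out = self.downsample(out)  # halves the temporal length
        return out
```

Per the repr, the per-signal stacks (`abpResiduals`, `eegResiduals`) chain these blocks, their flattened outputs feed `abpFc` and `eegFc` (32 features each), and the concatenated 64-feature vector passes through `fullLinear1`, `fullLinear2`, and `sigmoid` for the binary IOH prediction.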

Training Loop
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:50<00:00,  1.83it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.51it/s]
[2024-05-05 16:48:14.395705] Completed epoch 0 with training loss 0.51595634, validation loss 0.56930101
Validation loss improved to 0.56930101. Model saved.
[2024-05-05 16:49:07.069338] Completed epoch 1 with training loss 0.43843865, validation loss 0.60136664
No improvement in validation loss. 1 epochs without improvement.
[2024-05-05 16:49:59.492944] Completed epoch 2 with training loss 0.43609428, validation loss 0.62352270
No improvement in validation loss. 2 epochs without improvement.
[2024-05-05 16:50:52.286923] Completed epoch 3 with training loss 0.43331781, validation loss 0.61657584
No improvement in validation loss. 3 epochs without improvement.
[2024-05-05 16:51:44.661035] Completed epoch 4 with training loss 0.43372247, validation loss 0.60639501
No improvement in validation loss. 4 epochs without improvement.
[2024-05-05 16:52:36.879306] Completed epoch 5 with training loss 0.43268639, validation loss 0.65768874
No improvement in validation loss. 5 epochs without improvement.
[2024-05-05 16:53:29.710568] Completed epoch 6 with training loss 0.43122545, validation loss 0.62903214
No improvement in validation loss. 6 epochs without improvement.
[2024-05-05 16:54:21.926765] Completed epoch 7 with training loss 0.43106586, validation loss 0.61191785
No improvement in validation loss. 7 epochs without improvement.
[2024-05-05 16:55:14.019004] Completed epoch 8 with training loss 0.43007848, validation loss 0.63635039
No improvement in validation loss. 8 epochs without improvement.
[2024-05-05 16:56:06.360358] Completed epoch 9 with training loss 0.43108782, validation loss 0.60573196
No improvement in validation loss. 9 epochs without improvement.
[2024-05-05 16:56:58.517799] Completed epoch 10 with training loss 0.42950386, validation loss 0.58461857
No improvement in validation loss. 10 epochs without improvement.
[2024-05-05 16:57:50.943426] Completed epoch 11 with training loss 0.43028077, validation loss 0.56705523
Validation loss improved to 0.56705523. Model saved.
[2024-05-05 16:58:43.893502] Completed epoch 12 with training loss 0.43070382, validation loss 0.57275963
No improvement in validation loss. 1 epochs without improvement.
[2024-05-05 16:59:36.801871] Completed epoch 13 with training loss 0.43025723, validation loss 0.54179633
Validation loss improved to 0.54179633. Model saved.
[2024-05-05 17:00:28.984045] Completed epoch 14 with training loss 0.42799503, validation loss 0.57969934
No improvement in validation loss. 1 epochs without improvement.
[2024-05-05 17:01:21.319680] Completed epoch 15 with training loss 0.42701739, validation loss 0.59155512
No improvement in validation loss. 2 epochs without improvement.
[2024-05-05 17:02:13.967933] Completed epoch 16 with training loss 0.42744836, validation loss 0.55287564
No improvement in validation loss. 3 epochs without improvement.
[2024-05-05 17:03:06.275343] Completed epoch 17 with training loss 0.42606437, validation loss 0.57089710
No improvement in validation loss. 4 epochs without improvement.
[2024-05-05 17:03:58.849636] Completed epoch 18 with training loss 0.42602518, validation loss 0.52659851
Validation loss improved to 0.52659851. Model saved.
[2024-05-05 17:04:51.413189] Completed epoch 19 with training loss 0.42469499, validation loss 0.58481115
No improvement in validation loss. 1 epochs without improvement.
[2024-05-05 17:05:44.096428] Completed epoch 20 with training loss 0.42607078, validation loss 0.54867476
No improvement in validation loss. 2 epochs without improvement.
[2024-05-05 17:06:36.422511] Completed epoch 21 with training loss 0.42568353, validation loss 0.59193587
No improvement in validation loss. 3 epochs without improvement.
[2024-05-05 17:07:28.637107] Completed epoch 22 with training loss 0.42476803, validation loss 0.57307410
No improvement in validation loss. 4 epochs without improvement.
[2024-05-05 17:08:21.004156] Completed epoch 23 with training loss 0.42451820, validation loss 0.53372920
No improvement in validation loss. 5 epochs without improvement.
[2024-05-05 17:09:13.096333] Completed epoch 24 with training loss 0.42502645, validation loss 0.55740917
No improvement in validation loss. 6 epochs without improvement.
[2024-05-05 17:10:05.539488] Completed epoch 25 with training loss 0.42453331, validation loss 0.59450990
No improvement in validation loss. 7 epochs without improvement.
[2024-05-05 17:10:57.930842] Completed epoch 26 with training loss 0.42485833, validation loss 0.51523131
Validation loss improved to 0.51523131. Model saved.
[2024-05-05 17:11:50.504430] Completed epoch 27 with training loss 0.42520759, validation loss 0.51660085
No improvement in validation loss. 1 epochs without improvement.
[2024-05-05 17:12:42.847417] Completed epoch 28 with training loss 0.42406878, validation loss 0.54687643
No improvement in validation loss. 2 epochs without improvement.
[2024-05-05 17:13:35.428436] Completed epoch 29 with training loss 0.42302537, validation loss 0.57769775
No improvement in validation loss. 3 epochs without improvement.
[2024-05-05 17:14:27.783449] Completed epoch 30 with training loss 0.42454922, validation loss 0.48952073
Validation loss improved to 0.48952073. Model saved.
[2024-05-05 17:15:20.155395] Completed epoch 31 with training loss 0.42436194, validation loss 0.57961977
No improvement in validation loss. 1 epochs without improvement.
[2024-05-05 17:16:12.653415] Completed epoch 32 with training loss 0.42359078, validation loss 0.50430012
No improvement in validation loss. 2 epochs without improvement.
[2024-05-05 17:17:05.077041] Completed epoch 33 with training loss 0.42258993, validation loss 0.58980006
No improvement in validation loss. 3 epochs without improvement.
[2024-05-05 17:17:57.762919] Completed epoch 34 with training loss 0.42179567, validation loss 0.54051793
No improvement in validation loss. 4 epochs without improvement.
[2024-05-05 17:18:50.254212] Completed epoch 35 with training loss 0.42226112, validation loss 0.52962536
No improvement in validation loss. 5 epochs without improvement.
[2024-05-05 17:19:42.985964] Completed epoch 36 with training loss 0.42143568, validation loss 0.57934642
No improvement in validation loss. 6 epochs without improvement.
[2024-05-05 17:20:35.254013] Completed epoch 37 with training loss 0.42235324, validation loss 0.56949270
No improvement in validation loss. 7 epochs without improvement.
[2024-05-05 17:21:27.376514] Completed epoch 38 with training loss 0.42179438, validation loss 0.53510648
No improvement in validation loss. 8 epochs without improvement.
[2024-05-05 17:22:19.694849] Completed epoch 39 with training loss 0.42202631, validation loss 0.51151609
No improvement in validation loss. 9 epochs without improvement.
[2024-05-05 17:23:11.807302] Completed epoch 40 with training loss 0.42219085, validation loss 0.46962696
Validation loss improved to 0.46962696. Model saved.
[2024-05-05 17:24:04.177697] Completed epoch 41 with training loss 0.42003873, validation loss 0.48397124
No improvement in validation loss. 1 epochs without improvement.
[2024-05-05 17:24:56.318208] Completed epoch 42 with training loss 0.41965652, validation loss 0.49297124
No improvement in validation loss. 2 epochs without improvement.
[2024-05-05 17:25:48.634869] Completed epoch 43 with training loss 0.42182067, validation loss 0.53379601
No improvement in validation loss. 3 epochs without improvement.
[2024-05-05 17:26:40.795645] Completed epoch 44 with training loss 0.42196274, validation loss 0.45372486
Validation loss improved to 0.45372486. Model saved.
[2024-05-05 17:27:33.160624] Completed epoch 45 with training loss 0.42122436, validation loss 0.55125362
No improvement in validation loss. 1 epochs without improvement.
[2024-05-05 17:28:25.302199] Completed epoch 46 with training loss 0.42264581, validation loss 0.51392055
No improvement in validation loss. 2 epochs without improvement.
[2024-05-05 17:29:17.378902] Completed epoch 47 with training loss 0.42150050, validation loss 0.53601408
No improvement in validation loss. 3 epochs without improvement.
[2024-05-05 17:30:09.827173] Completed epoch 48 with training loss 0.42094296, validation loss 0.47534904
No improvement in validation loss. 4 epochs without improvement.
[2024-05-05 17:31:01.993397] Completed epoch 49 with training loss 0.42146531, validation loss 0.47148240
No improvement in validation loss. 5 epochs without improvement.
[2024-05-05 17:31:54.821070] Completed epoch 50 with training loss 0.42164052, validation loss 0.53064328
No improvement in validation loss. 6 epochs without improvement.
[2024-05-05 17:32:46.926053] Completed epoch 51 with training loss 0.42160490, validation loss 0.54143822
No improvement in validation loss. 7 epochs without improvement.
[2024-05-05 17:33:39.211566] Completed epoch 52 with training loss 0.42258918, validation loss 0.51714188
No improvement in validation loss. 8 epochs without improvement.
[2024-05-05 17:34:31.326865] Completed epoch 53 with training loss 0.42048326, validation loss 0.53007126
No improvement in validation loss. 9 epochs without improvement.
[2024-05-05 17:35:23.417724] Completed epoch 54 with training loss 0.42183486, validation loss 0.45353174
Validation loss improved to 0.45353174. Model saved.
[2024-05-05 17:36:15.801696] Completed epoch 55 with training loss 0.42164177, validation loss 0.56692195
No improvement in validation loss. 1 epochs without improvement.
[2024-05-05 17:37:07.885044] Completed epoch 56 with training loss 0.42033738, validation loss 0.51398033
No improvement in validation loss. 2 epochs without improvement.
[2024-05-05 17:38:00.117880] Completed epoch 57 with training loss 0.42225668, validation loss 0.49777091
No improvement in validation loss. 3 epochs without improvement.
[2024-05-05 17:38:52.158191] Completed epoch 58 with training loss 0.42171979, validation loss 0.48262027
No improvement in validation loss. 4 epochs without improvement.
[2024-05-05 17:39:44.547952] Completed epoch 59 with training loss 0.42007923, validation loss 0.47659588
No improvement in validation loss. 5 epochs without improvement.
[2024-05-05 17:40:36.757588] Completed epoch 60 with training loss 0.42042994, validation loss 0.51550865
No improvement in validation loss. 6 epochs without improvement.
[2024-05-05 17:41:29.006128] Completed epoch 61 with training loss 0.41976652, validation loss 0.49270201
No improvement in validation loss. 7 epochs without improvement.
[2024-05-05 17:42:21.255021] Completed epoch 62 with training loss 0.42108577, validation loss 0.53221953
No improvement in validation loss. 8 epochs without improvement.
[2024-05-05 17:43:13.363723] Completed epoch 63 with training loss 0.42162135, validation loss 0.44363013
Validation loss improved to 0.44363013. Model saved.
[2024-05-05 17:44:05.688404] Completed epoch 64 with training loss 0.42082286, validation loss 0.53076285
No improvement in validation loss. 1 epochs without improvement.
[2024-05-05 17:44:57.778502] Completed epoch 65 with training loss 0.41991246, validation loss 0.58819091
No improvement in validation loss. 2 epochs without improvement.
[2024-05-05 17:45:50.271375] Completed epoch 66 with training loss 0.42060289, validation loss 0.52410644
No improvement in validation loss. 3 epochs without improvement.
[2024-05-05 17:46:42.345835] Completed epoch 67 with training loss 0.41988003, validation loss 0.49362469
No improvement in validation loss. 4 epochs without improvement.
[2024-05-05 17:47:34.738435] Completed epoch 68 with training loss 0.42067471, validation loss 0.50348061
No improvement in validation loss. 5 epochs without improvement.
[2024-05-05 17:48:26.887444] Completed epoch 69 with training loss 0.42094615, validation loss 0.52611887
No improvement in validation loss. 6 epochs without improvement.
[2024-05-05 17:49:19.228160] Completed epoch 70 with training loss 0.41916093, validation loss 0.49924871
No improvement in validation loss. 7 epochs without improvement.
[2024-05-05 17:50:11.601973] Completed epoch 71 with training loss 0.41947845, validation loss 0.51205134
No improvement in validation loss. 8 epochs without improvement.
[2024-05-05 17:51:03.778835] Completed epoch 72 with training loss 0.42038840, validation loss 0.50286973
No improvement in validation loss. 9 epochs without improvement.
[2024-05-05 17:51:56.051857] Completed epoch 73 with training loss 0.42087385, validation loss 0.59875554
No improvement in validation loss. 10 epochs without improvement.
[2024-05-05 17:52:48.242129] Completed epoch 74 with training loss 0.42045289, validation loss 0.46322250
No improvement in validation loss. 11 epochs without improvement.
[2024-05-05 17:53:40.742277] Completed epoch 75 with training loss 0.41954413, validation loss 0.45367849
No improvement in validation loss. 12 epochs without improvement.
[2024-05-05 17:54:33.040498] Completed epoch 76 with training loss 0.41996065, validation loss 0.51681632
No improvement in validation loss. 13 epochs without improvement.
[2024-05-05 17:55:25.347609] Completed epoch 77 with training loss 0.42070824, validation loss 0.50482982
No improvement in validation loss. 14 epochs without improvement.
[2024-05-05 17:56:17.758039] Completed epoch 78 with training loss 0.41929317, validation loss 0.59539324
No improvement in validation loss. 15 epochs without improvement.
[2024-05-05 17:57:09.814641] Completed epoch 79 with training loss 0.42098191, validation loss 0.53923905
No improvement in validation loss. 16 epochs without improvement.
[2024-05-05 17:58:02.031118] Completed epoch 80 with training loss 0.42061210, validation loss 0.47510773
No improvement in validation loss. 17 epochs without improvement.
[2024-05-05 17:58:53.907948] Completed epoch 81 with training loss 0.41841865, validation loss 0.47082710
No improvement in validation loss. 18 epochs without improvement.
[2024-05-05 17:59:45.990407] Completed epoch 82 with training loss 0.42090458, validation loss 0.50954586
No improvement in validation loss. 19 epochs without improvement.
[2024-05-05 18:00:37.841607] Completed epoch 83 with training loss 0.42100099, validation loss 0.47490269
No improvement in validation loss. 20 epochs without improvement.
Early stopping due to no improvement in validation loss.
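The log above reflects a save-on-improvement early-stopping policy: the model is checkpointed whenever validation loss improves, and training halts after 20 consecutive stale epochs. A minimal sketch of that rule (function name is hypothetical and the patience of 20 is inferred from the log; the project's actual trainer may differ):

```python
def early_stopping_run(val_losses, patience=20):
    """Simulate the stopping rule over a sequence of per-epoch validation
    losses. Returns (best_epoch, best_loss, stop_epoch); epochs are
    0-indexed here."""
    best_loss = float("inf")
    best_epoch = -1
    stale = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_loss = loss      # "Validation loss improved ... Model saved."
            best_epoch = epoch
            stale = 0
        else:
            stale += 1            # "No improvement ... N epochs without improvement."
            if stale >= patience:
                return best_epoch, best_loss, epoch  # "Early stopping ..."
    return best_epoch, best_loss, len(val_losses) - 1
```

Under this rule the checkpoint restored for evaluation is the one from the best epoch (epoch 63 above), not the final epoch at which training stopped.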

Plot Validation and Loss Values from Training
  Epoch with best Validation Loss:   63, 0.4436
Generate AUROC/AUPRC for Each Intermediate Model

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0000.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:05<00:00,  2.91it/s]
Loss: 0.5682963468134403
AUROC: 0.8403264152155754
AUPRC: 0.6933219906969287
Sensitivity: 0.7834179357021996
Specificity: 0.7482419127988749
Threshold: 0.18
Accuracy:  0.7585692995529061

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.59it/s]
Loss: 0.5324673633626167
AUROC: 0.8249584608427717
AUPRC: 0.6644456122435107
Sensitivity: 0.7452107279693486
Specificity: 0.760064043915828
Threshold: 0.18
Accuracy:  0.7561468507915123
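Each metrics block above can be reproduced from the model's probability outputs with standard scikit-learn calls plus a thresholded confusion matrix. A sketch, assuming probabilistic predictions and a fixed decision threshold (how the notebook selects each reported threshold is not shown in this output):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def evaluate_at_threshold(y_true, y_prob, threshold):
    """Compute the metric set reported per model: AUROC and AUPRC are
    threshold-free; sensitivity, specificity, and accuracy come from
    binarizing the probabilities at `threshold`."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    return {
        "auroc": roc_auc_score(y_true, y_prob),
        "auprc": average_precision_score(y_true, y_prob),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / len(y_true),
    }
```

Note that AUPRC (unlike AUROC) depends on the positive-class prevalence, which is why it sits well below AUROC for these imbalanced hypotension labels.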

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0001.model
AUROC/AUPRC on Validation Data
Loss: 0.5981034263968468
AUROC: 0.8424730069657139
AUPRC: 0.7033719471918801
Sensitivity: 0.7817258883248731
Specificity: 0.7609001406469761
Threshold: 0.14
Accuracy:  0.7670144063586687

AUROC/AUPRC on Test Data
Loss: 0.5570537711077548
AUROC: 0.8300342274476262
AUPRC: 0.6787388127728637
Sensitivity: 0.7362707535121328
Specificity: 0.7717291857273559
Threshold: 0.14
Accuracy:  0.7623779050185248

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0002.model
AUROC/AUPRC on Validation Data
Loss: 0.6243751086294651
AUROC: 0.8417911904064959
AUPRC: 0.7057919897638172
Sensitivity: 0.766497461928934
Specificity: 0.7686357243319268
Threshold: 0.12
Accuracy:  0.7680079483358172

AUROC/AUPRC on Test Data
Loss: 0.5776290874531929
AUROC: 0.8309666676014438
AUPRC: 0.6820077139241164
Sensitivity: 0.7349936143039592
Specificity: 0.777676120768527
Threshold: 0.12
Accuracy:  0.7664196699225329

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0003.model
AUROC/AUPRC on Validation Data
Loss: 0.61485705524683
AUROC: 0.8415270311113016
AUPRC: 0.7054706413351755
Sensitivity: 0.7868020304568528
Specificity: 0.7447257383966245
Threshold: 0.12
Accuracy:  0.7570789865871833

AUROC/AUPRC on Test Data
Loss: 0.5695577520639339
AUROC: 0.8315512684340965
AUPRC: 0.6825125345305049
Sensitivity: 0.7567049808429118
Specificity: 0.7566331198536139
Threshold: 0.12
Accuracy:  0.7566520714045133

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0004.model
AUROC/AUPRC on Validation Data
Loss: 0.6065035965293646
AUROC: 0.8414246991320821
AUPRC: 0.7053736243833442
Sensitivity: 0.7428087986463621
Specificity: 0.790436005625879
Threshold: 0.13
Accuracy:  0.7764530551415797

AUROC/AUPRC on Test Data
Loss: 0.5638686291714932
AUROC: 0.8318704071772185
AUPRC: 0.6831637841875644
Sensitivity: 0.7681992337164751
Specificity: 0.7445105215004575
Threshold: 0.12
Accuracy:  0.7507578309195015

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0005.model
AUROC/AUPRC on Validation Data
Loss: 0.6521109193563461
AUROC: 0.8408809117541367
AUPRC: 0.7053618846602976
Sensitivity: 0.7445008460236887
Specificity: 0.7883263009845288
Threshold: 0.1
Accuracy:  0.7754595131644312

AUROC/AUPRC on Test Data
Loss: 0.603412926831144
AUROC: 0.8314630488456087
AUPRC: 0.6831520452819876
Sensitivity: 0.776500638569604
Specificity: 0.7351326623970723
Threshold: 0.09
Accuracy:  0.7460424385314921

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0006.model
AUROC/AUPRC on Validation Data
Loss: 0.6271960791200399
AUROC: 0.8405548773087167
AUPRC: 0.7043835449455111
Sensitivity: 0.7428087986463621
Specificity: 0.7883263009845288
Threshold: 0.11
Accuracy:  0.7749627421758569

AUROC/AUPRC on Test Data
Loss: 0.583129051200887
AUROC: 0.8312619257109272
AUPRC: 0.6827109013297561
Sensitivity: 0.7681992337164751
Specificity: 0.7433668801463861
Threshold: 0.1
Accuracy:  0.7499157965644998

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0007.model
AUROC/AUPRC on Validation Data
Loss: 0.6084481198340654
AUROC: 0.8399753927287178
AUPRC: 0.7029324447615356
Sensitivity: 0.7563451776649747
Specificity: 0.7679324894514767
Threshold: 0.12
Accuracy:  0.7645305514157973

AUROC/AUPRC on Test Data
Loss: 0.567634845984743
AUROC: 0.8306746958176903
AUPRC: 0.6818030585016958
Sensitivity: 0.7369093231162197
Specificity: 0.7740164684354987
Threshold: 0.12
Accuracy:  0.7642303805995284

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0008.model
AUROC/AUPRC on Validation Data
Loss: 0.6386548280715942
AUROC: 0.8390734434235044
AUPRC: 0.7024584473576663
Sensitivity: 0.7783417935702199
Specificity: 0.750351617440225
Threshold: 0.1
Accuracy:  0.7585692995529061

AUROC/AUPRC on Test Data
Loss: 0.5896699694876976
AUROC: 0.8300902410439591
AUPRC: 0.6810602177304393
Sensitivity: 0.7528735632183908
Specificity: 0.7570905763952425
Threshold: 0.1
Accuracy:  0.7559784439205119

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0009.model
AUROC/AUPRC on Validation Data
Loss: 0.6050464473664761
AUROC: 0.8389734912577552
AUPRC: 0.7018457332230477
Sensitivity: 0.751269035532995
Specificity: 0.7735583684950773
Threshold: 0.12
Accuracy:  0.7670144063586687

AUROC/AUPRC on Test Data
Loss: 0.5626198503565281
AUROC: 0.829967040343811
AUPRC: 0.6809795496744737
Sensitivity: 0.7343550446998723
Specificity: 0.7779048490393413
Threshold: 0.12
Accuracy:  0.7664196699225329

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0010.model
AUROC/AUPRC on Validation Data
Loss: 0.5876003950834274
AUROC: 0.8388556904909794
AUPRC: 0.7008988986135467
Sensitivity: 0.7766497461928934
Specificity: 0.7517580872011251
Threshold: 0.13
Accuracy:  0.7590660705414803

AUROC/AUPRC on Test Data
Loss: 0.544724758635176
AUROC: 0.8297468565198949
AUPRC: 0.6805131080377691
Sensitivity: 0.7515964240102171
Specificity: 0.7609789569990851
Threshold: 0.13
Accuracy:  0.7585045469855171

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0011.model
AUROC/AUPRC on Validation Data
Loss: 0.566127635538578
AUROC: 0.8387973850609589
AUPRC: 0.7008953543338943
Sensitivity: 0.766497461928934
Specificity: 0.7637130801687764
Threshold: 0.15
Accuracy:  0.7645305514157973

AUROC/AUPRC on Test Data
Loss: 0.5274264568344076
AUROC: 0.8299033586541079
AUPRC: 0.6811755809504066
Sensitivity: 0.7401021711366539
Specificity: 0.7710430009149131
Threshold: 0.15
Accuracy:  0.7628831256315257

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0012.model
AUROC/AUPRC on Validation Data
Loss: 0.5755755715072155
AUROC: 0.8384427928538962
AUPRC: 0.7019414846378599
Sensitivity: 0.7495769881556683
Specificity: 0.7812939521800282
Threshold: 0.14
Accuracy:  0.7719821162444114

AUROC/AUPRC on Test Data
Loss: 0.5361116807511512
AUROC: 0.8294142803560098
AUPRC: 0.6808671783554594
Sensitivity: 0.7624521072796935
Specificity: 0.7481701738334858
Threshold: 0.13
Accuracy:  0.7519366790165039

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0013.model
AUROC/AUPRC on Validation Data
Loss: 0.542388554662466
AUROC: 0.838819993288926
AUPRC: 0.7003715813974222
Sensitivity: 0.7715736040609137
Specificity: 0.7566807313642757
Threshold: 0.16
Accuracy:  0.7610531544957775

AUROC/AUPRC on Test Data
Loss: 0.5078507030897952
AUROC: 0.8301864208436597
AUPRC: 0.6819491634688861
Sensitivity: 0.7490421455938697
Specificity: 0.7639524245196706
Threshold: 0.16
Accuracy:  0.76002020882452

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0014.model
AUROC/AUPRC on Validation Data
Loss: 0.5794648732990026
AUROC: 0.8377240891858897
AUPRC: 0.7015038683199196
Sensitivity: 0.766497461928934
Specificity: 0.759493670886076
Threshold: 0.13
Accuracy:  0.7615499254843517

AUROC/AUPRC on Test Data
Loss: 0.5397821836015011
AUROC: 0.8287982768552696
AUPRC: 0.6798408994866411
Sensitivity: 0.7458492975734355
Specificity: 0.7671546203110704
Threshold: 0.13
Accuracy:  0.761535870663523

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0015.model
AUROC/AUPRC on Validation Data
Loss: 0.5856853630393744
AUROC: 0.8364687375803483
AUPRC: 0.7011976936816717
Sensitivity: 0.754653130287648
Specificity: 0.7777777777777778
Threshold: 0.13
Accuracy:  0.7709885742672627

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.67it/s]
Loss: 0.5470299949037268
AUROC: 0.8275137616715684
AUPRC: 0.6776039517901056
Sensitivity: 0.7624521072796935
Specificity: 0.7493138151875571
Threshold: 0.12
Accuracy:  0.7527787133715056

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0016.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.63it/s]
Loss: 0.5526940170675516
AUROC: 0.8369673085023596
AUPRC: 0.7015038552744287
Sensitivity: 0.7563451776649747
Specificity: 0.7644163150492265
Threshold: 0.16
Accuracy:  0.762046696472926

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.63it/s]
Loss: 0.5185487923469949
AUROC: 0.828115378368557
AUPRC: 0.679425553040573
Sensitivity: 0.7611749680715197
Specificity: 0.747483989021043
Threshold: 0.15
Accuracy:  0.7510946446615022

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0017.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.64it/s]
Loss: 0.5684973951429129
AUROC: 0.8367828729584175
AUPRC: 0.7032026679455393
Sensitivity: 0.7529610829103215
Specificity: 0.7672292545710268
Threshold: 0.14
Accuracy:  0.7630402384500745

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.66it/s]
Loss: 0.5329737051370296
AUROC: 0.827911261025988
AUPRC: 0.6783924095064559
Sensitivity: 0.7624521072796935
Specificity: 0.7497712717291857
Threshold: 0.13
Accuracy:  0.7531155271135063

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0018.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.60it/s]
Loss: 0.5257867388427258
AUROC: 0.836310479984579
AUPRC: 0.7027349328983681
Sensitivity: 0.7648054145516074
Specificity: 0.7489451476793249
Threshold: 0.19
Accuracy:  0.7536015896671634

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.67it/s]
Loss: 0.4952833706394155
AUROC: 0.8279274005367957
AUPRC: 0.6801285249089394
Sensitivity: 0.7432950191570882
Specificity: 0.7596065873741995
Threshold: 0.19
Accuracy:  0.7553048164365106

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0019.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.62it/s]
Loss: 0.5853938367217779
AUROC: 0.8371695926473282
AUPRC: 0.704816484848158
Sensitivity: 0.7715736040609137
Specificity: 0.750351617440225
Threshold: 0.12
Accuracy:  0.756582215598609

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.67it/s]
Loss: 0.5459126585975607
AUROC: 0.8282764083293312
AUPRC: 0.6777043340852478
Sensitivity: 0.7490421455938697
Specificity: 0.7582342177493138
Threshold: 0.12
Accuracy:  0.7558100370495117

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0020.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.63it/s]
Loss: 0.547278506681323
AUROC: 0.8370446524401417
AUPRC: 0.7040772157732216
Sensitivity: 0.7698815566835872
Specificity: 0.7433192686357243
Threshold: 0.16
Accuracy:  0.7511177347242921

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.66it/s]
Loss: 0.5141886624884098
AUROC: 0.8282094403138981
AUPRC: 0.678975074890339
Sensitivity: 0.7490421455938697
Specificity: 0.7580054894784996
Threshold: 0.16
Accuracy:  0.7556416301785113

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0021.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.64it/s]
Loss: 0.5912894196808338
AUROC: 0.834767170949141
AUPRC: 0.702953464024877
Sensitivity: 0.7580372250423012
Specificity: 0.7510548523206751
Threshold: 0.11
Accuracy:  0.7531048186785891

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.64it/s]
Loss: 0.5530261853907971
AUROC: 0.8264882819848589
AUPRC: 0.6755394904711386
Sensitivity: 0.7445721583652618
Specificity: 0.7623513266239708
Threshold: 0.11
Accuracy:  0.7576625126305153

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0022.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.63it/s]
Loss: 0.5755870789289474
AUROC: 0.8380679722323364
AUPRC: 0.7049477241530122
Sensitivity: 0.7715736040609137
Specificity: 0.7475386779184248
Threshold: 0.12
Accuracy:  0.754595131644312

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.66it/s]
Loss: 0.535362682443984
AUROC: 0.8292222128744512
AUPRC: 0.6774289040347031
Sensitivity: 0.7503192848020435
Specificity: 0.7580054894784996
Threshold: 0.12
Accuracy:  0.7559784439205119

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0023.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.62it/s]
Loss: 0.5324299652129412
AUROC: 0.8385689229678178
AUPRC: 0.7037210725323143
Sensitivity: 0.754653130287648
Specificity: 0.7728551336146273
Threshold: 0.17
Accuracy:  0.767511177347243

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.66it/s]
Loss: 0.50468255389244
AUROC: 0.8295856805001992
AUPRC: 0.6811866885901741
Sensitivity: 0.7496807151979565
Specificity: 0.7591491308325709
Threshold: 0.16
Accuracy:  0.7566520714045133

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0024.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.61it/s]
Loss: 0.5580567065626383
AUROC: 0.839924226739108
AUPRC: 0.7066727358641954
Sensitivity: 0.7597292724196277
Specificity: 0.770745428973277
Threshold: 0.15
Accuracy:  0.767511177347243

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.64it/s]
Loss: 0.5236863322714542
AUROC: 0.830569898541631
AUPRC: 0.6804374619814229
Sensitivity: 0.756066411238825
Specificity: 0.7538883806038427
Threshold: 0.14
Accuracy:  0.754462782081509

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0025.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.61it/s]
Loss: 0.5945566147565842
AUROC: 0.8389699215375499
AUPRC: 0.7048164842454006
Sensitivity: 0.7698815566835872
Specificity: 0.7609001406469761
Threshold: 0.11
Accuracy:  0.7635370094386488

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.65it/s]
Loss: 0.5529424849342792
AUROC: 0.8297275767422784
AUPRC: 0.6765366486320131
Sensitivity: 0.7445721583652618
Specificity: 0.7687557182067704
Threshold: 0.11
Accuracy:  0.7623779050185248

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0026.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.59it/s]
Loss: 0.5157655142247677
AUROC: 0.8373242805228925
AUPRC: 0.705277192687151
Sensitivity: 0.7563451776649747
Specificity: 0.7679324894514767
Threshold: 0.19
Accuracy:  0.7645305514157973

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.63it/s]
Loss: 0.48706478292637684
AUROC: 0.8289508353986064
AUPRC: 0.6800773838929348
Sensitivity: 0.7554278416347382
Specificity: 0.75
Threshold: 0.18
Accuracy:  0.7514314584035029

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0027.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.61it/s]
Loss: 0.5156387556344271
AUROC: 0.8372421769581699
AUPRC: 0.7056537366280302
Sensitivity: 0.7563451776649747
Specificity: 0.7672292545710268
Threshold: 0.19
Accuracy:  0.764033780427223

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.66it/s]
Loss: 0.4876450281193916
AUROC: 0.8288918275943862
AUPRC: 0.679400907293477
Sensitivity: 0.7522349936143039
Specificity: 0.7538883806038427
Threshold: 0.18
Accuracy:  0.7534523408555069

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0028.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.63it/s]
Loss: 0.544324304908514
AUROC: 0.8372469365851103
AUPRC: 0.7053308704091554
Sensitivity: 0.7563451776649747
Specificity: 0.7616033755274262
Threshold: 0.16
Accuracy:  0.7600596125186289

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.67it/s]
Loss: 0.5139389805337216
AUROC: 0.8288520995677824
AUPRC: 0.6770505176588997
Sensitivity: 0.7592592592592593
Specificity: 0.7488563586459286
Threshold: 0.15
Accuracy:  0.7515998652745032

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0029.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.62it/s]
Loss: 0.5757092162966728
AUROC: 0.8364925357150507
AUPRC: 0.7053017515468023
Sensitivity: 0.7648054145516074
Specificity: 0.7566807313642757
Threshold: 0.12
Accuracy:  0.7590660705414803

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.67it/s]
Loss: 0.5398005992174149
AUROC: 0.8277367936444505
AUPRC: 0.6754127774285432
Sensitivity: 0.7656449553001277
Specificity: 0.742451967063129
Threshold: 0.11
Accuracy:  0.7485685415964971

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0030.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.62it/s]
Loss: 0.4887922313064337
AUROC: 0.8443947063429168
AUPRC: 0.7007247685726083
Sensitivity: 0.7715736040609137
Specificity: 0.7538677918424754
Threshold: 0.2
Accuracy:  0.7590660705414803

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.66it/s]
Loss: 0.466264176241895
AUROC: 0.8347581381109791
AUPRC: 0.6860131328674787
Sensitivity: 0.7477650063856961
Specificity: 0.7630375114364135
Threshold: 0.2
Accuracy:  0.759009767598518

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0031.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.54it/s]
Loss: 0.5787774305790663
AUROC: 0.8354585067622399
AUPRC: 0.7033873674624236
Sensitivity: 0.7648054145516074
Specificity: 0.7447257383966245
Threshold: 0.12
Accuracy:  0.7506209637357178

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.66it/s]
Loss: 0.5412869577078109
AUROC: 0.8268167684989467
AUPRC: 0.673743127254783
Sensitivity: 0.7490421455938697
Specificity: 0.7577767612076852
Threshold: 0.12
Accuracy:  0.755473223307511

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0032.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.63it/s]
Loss: 0.5054099261760712
AUROC: 0.840786909122063
AUPRC: 0.7058193622836582
Sensitivity: 0.7698815566835872
Specificity: 0.7524613220815752
Threshold: 0.21
Accuracy:  0.7575757575757576

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.66it/s]
Loss: 0.4774575946812934
AUROC: 0.832495831478385
AUPRC: 0.6839069787787245
Sensitivity: 0.7503192848020435
Specificity: 0.7616651418115279
Threshold: 0.21
Accuracy:  0.7586729538565173

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0033.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.63it/s]
Loss: 0.586044616997242
AUROC: 0.8345220501617084
AUPRC: 0.7028269189804958
Sensitivity: 0.7614213197969543
Specificity: 0.7510548523206751
Threshold: 0.11
Accuracy:  0.7540983606557377

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.67it/s]
Loss: 0.552978702365084
AUROC: 0.8261099893785953
AUPRC: 0.6721457929470052
Sensitivity: 0.7426564495530013
Specificity: 0.760064043915828
Threshold: 0.11
Accuracy:  0.755473223307511

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0034.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.61it/s]
Loss: 0.5359113775193691
AUROC: 0.8383749681699948
AUPRC: 0.7040149993046576
Sensitivity: 0.7580372250423012
Specificity: 0.7644163150492265
Threshold: 0.16
Accuracy:  0.7625434674615003

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.66it/s]
Loss: 0.5078169484721854
AUROC: 0.830400178075037
AUPRC: 0.6803053586720923
Sensitivity: 0.7579821200510856
Specificity: 0.7520585544373285
Threshold: 0.15
Accuracy:  0.7536207477265072

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0035.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.61it/s]
Loss: 0.5272859986871481
AUROC: 0.8350920154878261
AUPRC: 0.6999503126482514
Sensitivity: 0.7681895093062606
Specificity: 0.749648382559775
Threshold: 0.17
Accuracy:  0.7550919026328863

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.66it/s]
Loss: 0.49914549417952275
AUROC: 0.8267545473984569
AUPRC: 0.6732693102615867
Sensitivity: 0.743933588761175
Specificity: 0.7596065873741995
Threshold: 0.17
Accuracy:  0.755473223307511

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0036.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.55it/s]
Loss: 0.5788354016840458
AUROC: 0.8426770759707854
AUPRC: 0.7102267985992764
Sensitivity: 0.7580372250423012
Specificity: 0.7637130801687764
Threshold: 0.12
Accuracy:  0.762046696472926

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.67it/s]
Loss: 0.5445769873071225
AUROC: 0.8339409384460967
AUPRC: 0.6842103373332424
Sensitivity: 0.7675606641123882
Specificity: 0.7513723696248856
Threshold: 0.11
Accuracy:  0.7556416301785113

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0037.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.62it/s]
Loss: 0.5681354571133852
AUROC: 0.8361926792178029
AUPRC: 0.7036423389311116
Sensitivity: 0.7614213197969543
Specificity: 0.7545710267229254
Threshold: 0.12
Accuracy:  0.756582215598609

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.66it/s]
Loss: 0.5349627343264032
AUROC: 0.8285417243599407
AUPRC: 0.6768343342658454
Sensitivity: 0.7605363984674329
Specificity: 0.7472552607502287
Threshold: 0.11
Accuracy:  0.7507578309195015

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0038.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.62it/s]
Loss: 0.5344060696661472
AUROC: 0.8425325023024697
AUPRC: 0.7066227101565578
Sensitivity: 0.7715736040609137
Specificity: 0.7609001406469761
Threshold: 0.16
Accuracy:  0.764033780427223

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.66it/s]
Loss: 0.5034291645947923
AUROC: 0.8341511902633618
AUPRC: 0.683009359886284
Sensitivity: 0.7522349936143039
Specificity: 0.7653247941445562
Threshold: 0.16
Accuracy:  0.7618726844055237

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0039.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.63it/s]
Loss: 0.5117585007101297
AUROC: 0.8360927270520537
AUPRC: 0.7038540762389156
Sensitivity: 0.7461928934010152
Specificity: 0.7714486638537271
Threshold: 0.19
Accuracy:  0.764033780427223

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.67it/s]
Loss: 0.4841543321913861
AUROC: 0.8291735752536459
AUPRC: 0.6804536097025426
Sensitivity: 0.7528735632183908
Specificity: 0.7543458371454712
Threshold: 0.18
Accuracy:  0.7539575614685079

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0040.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.61it/s]
Loss: 0.4698861576616764
AUROC: 0.8414603963341354
AUPRC: 0.7061748054184639
Sensitivity: 0.7597292724196277
Specificity: 0.7679324894514767
Threshold: 0.22
Accuracy:  0.7655240933929458

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.61it/s]
Loss: 0.4486726611218554
AUROC: 0.8332293393813411
AUPRC: 0.6848052068581446
Sensitivity: 0.7650063856960408
Specificity: 0.7435956084172004
Threshold: 0.21
Accuracy:  0.7492421690804985

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0041.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.60it/s]
Loss: 0.48407377302646637
AUROC: 0.8404073288735628
AUPRC: 0.7063022255269115
Sensitivity: 0.7597292724196277
Specificity: 0.7630098452883263
Threshold: 0.21
Accuracy:  0.762046696472926

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.66it/s]
Loss: 0.4587098521755097
AUROC: 0.8326475136681939
AUPRC: 0.68336034583621
Sensitivity: 0.7650063856960408
Specificity: 0.7465690759377859
Threshold: 0.2
Accuracy:  0.7514314584035029

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0042.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.63it/s]
Loss: 0.4947038572281599
AUROC: 0.8396231803351253
AUPRC: 0.7051827072330853
Sensitivity: 0.7529610829103215
Specificity: 0.7679324894514767
Threshold: 0.2
Accuracy:  0.7635370094386488

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.65it/s]
Loss: 0.4696943614077061
AUROC: 0.8319424142254378
AUPRC: 0.6839156759911077
Sensitivity: 0.7528735632183908
Specificity: 0.7536596523330283
Threshold: 0.19
Accuracy:  0.7534523408555069

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0043.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.58it/s]
Loss: 0.5319897476583719
AUROC: 0.837824041351639
AUPRC: 0.7024171896076586
Sensitivity: 0.7597292724196277
Specificity: 0.759493670886076
Threshold: 0.15
Accuracy:  0.7595628415300546

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.65it/s]
Loss: 0.5002479096676441
AUROC: 0.8301279242456641
AUPRC: 0.6766272906742516
Sensitivity: 0.7592592592592593
Specificity: 0.7532021957913998
Threshold: 0.14
Accuracy:  0.7547995958235096

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0044.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.58it/s]
Loss: 0.4545628074556589
AUROC: 0.84702202041404
AUPRC: 0.7113165141190716
Sensitivity: 0.7648054145516074
Specificity: 0.7658227848101266
Threshold: 0.22
Accuracy:  0.7655240933929458

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.61it/s]
Loss: 0.43435411725906614
AUROC: 0.8373145343816859
AUPRC: 0.6893812058282826
Sensitivity: 0.7675606641123882
Specificity: 0.7477127172918573
Threshold: 0.21
Accuracy:  0.7529471202425059

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0045.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.59it/s]
Loss: 0.5516787208616734
AUROC: 0.8442614367885845
AUPRC: 0.7092672645404269
Sensitivity: 0.7597292724196277
Specificity: 0.7735583684950773
Threshold: 0.13
Accuracy:  0.76949826130154

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.63it/s]
Loss: 0.5170044233190253
AUROC: 0.8361085988976641
AUPRC: 0.6856710163842965
Sensitivity: 0.7650063856960408
Specificity: 0.752516010978957
Threshold: 0.12
Accuracy:  0.7558100370495117

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0046.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.61it/s]
Loss: 0.5109621342271566
AUROC: 0.8389473133095828
AUPRC: 0.7047454632461083
Sensitivity: 0.754653130287648
Specificity: 0.7679324894514767
Threshold: 0.18
Accuracy:  0.764033780427223

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.66it/s]
Loss: 0.4848348504685341
AUROC: 0.8315267305353118
AUPRC: 0.6826251200675335
Sensitivity: 0.7503192848020435
Specificity: 0.755946935041171
Threshold: 0.17
Accuracy:  0.754462782081509

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0047.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.61it/s]
Loss: 0.5396117176860571
AUROC: 0.8367019593004299
AUPRC: 0.7033454334032326
Sensitivity: 0.7631133671742809
Specificity: 0.750351617440225
Threshold: 0.13
Accuracy:  0.7540983606557377

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.67it/s]
Loss: 0.5064101469643573
AUROC: 0.8292026409789922
AUPRC: 0.6775206448472711
Sensitivity: 0.7509578544061303
Specificity: 0.7621225983531564
Threshold: 0.13
Accuracy:  0.7591781744695184

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0048.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.59it/s]
Loss: 0.4776448979973793
AUROC: 0.8414104202512607
AUPRC: 0.7065006936414036
Sensitivity: 0.7681895093062606
Specificity: 0.7517580872011251
Threshold: 0.23
Accuracy:  0.756582215598609

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.66it/s]
Loss: 0.454645870530859
AUROC: 0.8329690623835179
AUPRC: 0.681861026334077
Sensitivity: 0.7484035759897829
Specificity: 0.7657822506861848
Threshold: 0.23
Accuracy:  0.7611990569215223

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0049.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.62it/s]
Loss: 0.4732330944389105
AUROC: 0.8469006499270587
AUPRC: 0.7123220433286236
Sensitivity: 0.7597292724196277
Specificity: 0.7742616033755274
Threshold: 0.22
Accuracy:  0.7699950322901142

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.67it/s]
Loss: 0.44938666040593006
AUROC: 0.8372354434757817
AUPRC: 0.6880315481325455
Sensitivity: 0.7669220945083014
Specificity: 0.7429094236047575
Threshold: 0.21
Accuracy:  0.7492421690804985

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0050.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.62it/s]
Loss: 0.5327041987329721
AUROC: 0.8418007096603768
AUPRC: 0.7084228670549787
Sensitivity: 0.7698815566835872
Specificity: 0.7517580872011251
Threshold: 0.14
Accuracy:  0.7570789865871833

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.63it/s]
Loss: 0.5013479825029982
AUROC: 0.8335723587581021
AUPRC: 0.681421714560827
Sensitivity: 0.764367816091954
Specificity: 0.7479414455626715
Threshold: 0.13
Accuracy:  0.7522734927585045

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0051.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.61it/s]
Loss: 0.5380129851400852
AUROC: 0.8395767739724561
AUPRC: 0.7096907578661407
Sensitivity: 0.7614213197969543
Specificity: 0.7573839662447257
Threshold: 0.14
Accuracy:  0.7585692995529061

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.67it/s]
Loss: 0.5048546188689292
AUROC: 0.8323277176599257
AUPRC: 0.6818039065691197
Sensitivity: 0.7650063856960408
Specificity: 0.7511436413540714
Threshold: 0.13
Accuracy:  0.7547995958235096

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0052.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.60it/s]
Loss: 0.5169755816459656
AUROC: 0.8393554513197256
AUPRC: 0.7075737358547604
Sensitivity: 0.7681895093062606
Specificity: 0.760196905766526
Threshold: 0.16
Accuracy:  0.7625434674615003

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.67it/s]
Loss: 0.4854697455117043
AUROC: 0.8320322404620604
AUPRC: 0.6794916997239961
Sensitivity: 0.7656449553001277
Specificity: 0.7509149130832571
Threshold: 0.15
Accuracy:  0.7547995958235096

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0053.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.61it/s]
Loss: 0.5331932809203863
AUROC: 0.8410546381374627
AUPRC: 0.7113805651961309
Sensitivity: 0.7580372250423012
Specificity: 0.7686357243319268
Threshold: 0.13
Accuracy:  0.7655240933929458

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.67it/s]
Loss: 0.4996796710059998
AUROC: 0.8344107369665782
AUPRC: 0.6835615197632199
Sensitivity: 0.7541507024265645
Specificity: 0.755946935041171
Threshold: 0.12
Accuracy:  0.755473223307511

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0054.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.61it/s]
Loss: 0.4538446869701147
AUROC: 0.8426967094319148
AUPRC: 0.7114590273770801
Sensitivity: 0.7597292724196277
Specificity: 0.7630098452883263
Threshold: 0.24
Accuracy:  0.762046696472926

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.65it/s]
Loss: 0.43429331227819973
AUROC: 0.8343133886955069
AUPRC: 0.6845401103829488
Sensitivity: 0.7656449553001277
Specificity: 0.7483989021043
Threshold: 0.23
Accuracy:  0.7529471202425059

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0055.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.63it/s]
Loss: 0.5680095460265875
AUROC: 0.839500619941409
AUPRC: 0.7088938791467863
Sensitivity: 0.7732656514382402
Specificity: 0.749648382559775
Threshold: 0.1
Accuracy:  0.756582215598609

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.64it/s]
Loss: 0.5280379221794453
AUROC: 0.8319470881109207
AUPRC: 0.6791974612640138
Sensitivity: 0.7554278416347382
Specificity: 0.7591491308325709
Threshold: 0.1
Accuracy:  0.7581677332435164

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0056.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.63it/s]
Loss: 0.5161662865430117
AUROC: 0.8398694910292931
AUPRC: 0.7071256040559701
Sensitivity: 0.7681895093062606
Specificity: 0.7644163150492265
Threshold: 0.15
Accuracy:  0.7655240933929458

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.66it/s]
Loss: 0.4848795025906664
AUROC: 0.8323375036076551
AUPRC: 0.6806973047477305
Sensitivity: 0.7586206896551724
Specificity: 0.7504574565416285
Threshold: 0.14
Accuracy:  0.7526103065005052

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0057.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.63it/s]
Loss: 0.49692144244909286
AUROC: 0.8481940785481235
AUPRC: 0.7109833734319918
Sensitivity: 0.7800338409475466
Specificity: 0.7524613220815752
Threshold: 0.2
Accuracy:  0.7605563835072032

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.67it/s]
Loss: 0.47016972113162914
AUROC: 0.8375809458542052
AUPRC: 0.6856391376733765
Sensitivity: 0.7573435504469987
Specificity: 0.760064043915828
Threshold: 0.2
Accuracy:  0.7593465813405187

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0058.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.63it/s]
Loss: 0.4819302335381508
AUROC: 0.8431774317528993
AUPRC: 0.7069560208270946
Sensitivity: 0.766497461928934
Specificity: 0.7468354430379747
Threshold: 0.19
Accuracy:  0.7526080476900149

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.66it/s]
Loss: 0.46016644829131187
AUROC: 0.8337200973570346
AUPRC: 0.6813395362153373
Sensitivity: 0.7471264367816092
Specificity: 0.7628087831655993
Threshold: 0.19
Accuracy:  0.7586729538565173

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0059.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.62it/s]
Loss: 0.4762381464242935
AUROC: 0.8463342543211463
AUPRC: 0.7141597041881476
Sensitivity: 0.7597292724196277
Specificity: 0.7728551336146273
Threshold: 0.19
Accuracy:  0.7690014903129657

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.67it/s]
Loss: 0.45263075828552246
AUROC: 0.836052220154028
AUPRC: 0.6850164942213541
Sensitivity: 0.7554278416347382
Specificity: 0.7518298261665142
Threshold: 0.18
Accuracy:  0.7527787133715056

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0060.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.55it/s]
Loss: 0.5197896342724562
AUROC: 0.846416357885869
AUPRC: 0.7132833668564452
Sensitivity: 0.7631133671742809
Specificity: 0.7637130801687764
Threshold: 0.15
Accuracy:  0.7635370094386488

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.65it/s]
Loss: 0.4879587343398561
AUROC: 0.8357985158076651
AUPRC: 0.6844571106296589
Sensitivity: 0.7624521072796935
Specificity: 0.75
Threshold: 0.14
Accuracy:  0.7532839339845065

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0061.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.60it/s]
Loss: 0.4927883427590132
AUROC: 0.8419375489349146
AUPRC: 0.7096988094509784
Sensitivity: 0.766497461928934
Specificity: 0.749648382559775
Threshold: 0.17
Accuracy:  0.754595131644312

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.66it/s]
Loss: 0.4697331085484079
AUROC: 0.8325143809613949
AUPRC: 0.6809990460693274
Sensitivity: 0.7611749680715197
Specificity: 0.7458828911253431
Threshold: 0.16
Accuracy:  0.7499157965644998

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0062.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.63it/s]
Loss: 0.5321589335799217
AUROC: 0.8465710457614332
AUPRC: 0.7123798772824005
Sensitivity: 0.7648054145516074
Specificity: 0.7552742616033755
Threshold: 0.12
Accuracy:  0.7580725285643318

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.66it/s]
Loss: 0.5026965566138004
AUROC: 0.8355518222895262
AUPRC: 0.6824644756814356
Sensitivity: 0.7528735632183908
Specificity: 0.762580054894785
Threshold: 0.12
Accuracy:  0.76002020882452

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0063.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.62it/s]
Loss: 0.445432772859931
AUROC: 0.8433297398149934
AUPRC: 0.7098825173702752
Sensitivity: 0.7614213197969543
Specificity: 0.7770745428973277
Threshold: 0.27
Accuracy:  0.7724788872329856

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.66it/s]
Loss: 0.42817513295944704
AUROC: 0.8332343053846666
AUPRC: 0.6830315402444084
Sensitivity: 0.7567049808429118
Specificity: 0.760064043915828
Threshold: 0.26
Accuracy:  0.7591781744695184

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0064.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.59it/s]
Loss: 0.5294871386140585
AUROC: 0.8472135953983928
AUPRC: 0.716910683743066
Sensitivity: 0.7580372250423012
Specificity: 0.7742616033755274
Threshold: 0.13
Accuracy:  0.76949826130154

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.64it/s]
Loss: 0.4976235009888385
AUROC: 0.8378595532466562
AUPRC: 0.6871149301163373
Sensitivity: 0.7681992337164751
Specificity: 0.7463403476669717
Threshold: 0.12
Accuracy:  0.7521050858875042

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0065.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.55it/s]
Loss: 0.5849390383809805
AUROC: 0.8439401619701047
AUPRC: 0.7093993271189833
Sensitivity: 0.7648054145516074
Specificity: 0.7693389592123769
Threshold: 0.1
Accuracy:  0.7680079483358172

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.66it/s]
Loss: 0.5483295847760871
AUROC: 0.8331843532335693
AUPRC: 0.6763298524006505
Sensitivity: 0.7637292464878672
Specificity: 0.7490850869167429
Threshold: 0.09
Accuracy:  0.7529471202425059

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0066.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.60it/s]
Loss: 0.525301855057478
AUROC: 0.8400289385317979
AUPRC: 0.707153949024081
Sensitivity: 0.7783417935702199
Specificity: 0.7566807313642757
Threshold: 0.13
Accuracy:  0.7630402384500745

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.66it/s]
Loss: 0.49240644181028326
AUROC: 0.8326792814835848
AUPRC: 0.677004822240091
Sensitivity: 0.7528735632183908
Specificity: 0.7616651418115279
Threshold: 0.13
Accuracy:  0.7593465813405187

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0067.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.62it/s]
Loss: 0.49375125393271446
AUROC: 0.8447017022805753
AUPRC: 0.7122252997521366
Sensitivity: 0.7648054145516074
Specificity: 0.7686357243319268
Threshold: 0.17
Accuracy:  0.767511177347243

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.67it/s]
Loss: 0.4653581586924005
AUROC: 0.8350156399892967
AUPRC: 0.6821282648696225
Sensitivity: 0.7554278416347382
Specificity: 0.752516010978957
Threshold: 0.16
Accuracy:  0.7532839339845065

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0068.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.63it/s]
Loss: 0.5029573887586594
AUROC: 0.8461402995233233
AUPRC: 0.7151232844157684
Sensitivity: 0.7563451776649747
Specificity: 0.7791842475386779
Threshold: 0.18
Accuracy:  0.7724788872329856

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.66it/s]
Loss: 0.47332948604796793
AUROC: 0.8375538519242971
AUPRC: 0.6867051072270433
Sensitivity: 0.7650063856960408
Specificity: 0.7458828911253431
Threshold: 0.17
Accuracy:  0.7509262377905018

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0069.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.60it/s]
Loss: 0.5266091395169497
AUROC: 0.8469113590876747
AUPRC: 0.7104913418443672
Sensitivity: 0.7749576988155669
Specificity: 0.7609001406469761
Threshold: 0.15
Accuracy:  0.7650273224043715

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.64it/s]
Loss: 0.4949849561807957
AUROC: 0.8383747760916738
AUPRC: 0.6852515112974555
Sensitivity: 0.7707535121328225
Specificity: 0.7438243366880146
Threshold: 0.14
Accuracy:  0.7509262377905018

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0070.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.61it/s]
Loss: 0.498222429305315
AUROC: 0.8495815098012617
AUPRC: 0.7136848390207909
Sensitivity: 0.7648054145516074
Specificity: 0.7637130801687764
Threshold: 0.16
Accuracy:  0.764033780427223

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.67it/s]
Loss: 0.47031273772107796
AUROC: 0.8375700644645656
AUPRC: 0.6807477770208114
Sensitivity: 0.7637292464878672
Specificity: 0.7504574565416285
Threshold: 0.15
Accuracy:  0.7539575614685079

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0071.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.60it/s]
Loss: 0.5144287180155516
AUROC: 0.8425598701573771
AUPRC: 0.7075069467708935
Sensitivity: 0.7563451776649747
Specificity: 0.7791842475386779
Threshold: 0.17
Accuracy:  0.7724788872329856

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.67it/s]
Loss: 0.48191905719168643
AUROC: 0.8337573423819757
AUPRC: 0.6777635558210907
Sensitivity: 0.7592592592592593
Specificity: 0.7513723696248856
Threshold: 0.16
Accuracy:  0.7534523408555069

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0072.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.60it/s]
Loss: 0.5036875065416098
AUROC: 0.8508808879560021
AUPRC: 0.7120267243070963
Sensitivity: 0.7783417935702199
Specificity: 0.7644163150492265
Threshold: 0.13
Accuracy:  0.7685047193243915

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.66it/s]
Loss: 0.48491274993470374
AUROC: 0.8361592813433681
AUPRC: 0.6831431746655907
Sensitivity: 0.7554278416347382
Specificity: 0.7596065873741995
Threshold: 0.12
Accuracy:  0.7585045469855171

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0073.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.64it/s]
Loss: 0.5971893146634102
AUROC: 0.8462390617823375
AUPRC: 0.714767456932533
Sensitivity: 0.7648054145516074
Specificity: 0.7672292545710268
Threshold: 0.08
Accuracy:  0.7665176353700944

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.66it/s]
Loss: 0.5610138041541931
AUROC: 0.835949686791249
AUPRC: 0.681755610756441
Sensitivity: 0.7458492975734355
Specificity: 0.7671546203110704
Threshold: 0.08
Accuracy:  0.761535870663523

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0074.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.61it/s]
Loss: 0.46344713121652603
AUROC: 0.8495624712934999
AUPRC: 0.7160577398762935
Sensitivity: 0.754653130287648
Specificity: 0.7721518987341772
Threshold: 0.18
Accuracy:  0.7670144063586687

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:17<00:00,  2.61it/s]
Loss: 0.4405608231082876
AUROC: 0.8389211094869358
AUPRC: 0.6873305074488191
Sensitivity: 0.7509578544061303
Specificity: 0.7584629460201281
Threshold: 0.17
Accuracy:  0.756483664533513

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0075.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.50it/s]
Loss: 0.4550224430859089
AUROC: 0.8424194611626341
AUPRC: 0.7097122598062213
Sensitivity: 0.7597292724196277
Specificity: 0.7735583684950773
Threshold: 0.29
Accuracy:  0.76949826130154

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.56it/s]
Loss: 0.4364504728545534
AUROC: 0.8361008577748332
AUPRC: 0.6852920535592087
Sensitivity: 0.7592592592592593
Specificity: 0.760064043915828
Threshold: 0.28
Accuracy:  0.7598518019535198

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0076.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.50it/s]
Loss: 0.5138803776353598
AUROC: 0.8429120825509696
AUPRC: 0.7111565865265534
Sensitivity: 0.7631133671742809
Specificity: 0.7658227848101266
Threshold: 0.14
Accuracy:  0.7650273224043715

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.56it/s]
Loss: 0.48226608399381027
AUROC: 0.8351520590218259
AUPRC: 0.6850449664956817
Sensitivity: 0.7618135376756067
Specificity: 0.7497712717291857
Threshold: 0.13
Accuracy:  0.7529471202425059

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0077.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.53it/s]
Loss: 0.5031171310693026
AUROC: 0.841380672582883
AUPRC: 0.7085088882414033
Sensitivity: 0.7698815566835872
Specificity: 0.7559774964838256
Threshold: 0.13
Accuracy:  0.7600596125186289

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.57it/s]
Loss: 0.47597358962322805
AUROC: 0.8328313288206968
AUPRC: 0.6800189351152854
Sensitivity: 0.7515964240102171
Specificity: 0.762580054894785
Threshold: 0.13
Accuracy:  0.7596833950825194

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0078.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.51it/s]
Loss: 0.5950289741158485
AUROC: 0.8471469606212266
AUPRC: 0.7123491080498481
Sensitivity: 0.7631133671742809
Specificity: 0.7672292545710268
Threshold: 0.07
Accuracy:  0.7660208643815202

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.56it/s]
Loss: 0.5605939465000275
AUROC: 0.836886654771628
AUPRC: 0.6846627832929533
Sensitivity: 0.743933588761175
Specificity: 0.7715004574565416
Threshold: 0.07
Accuracy:  0.7642303805995284

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0079.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.52it/s]
Loss: 0.5406040456146002
AUROC: 0.8507761761633124
AUPRC: 0.7185804089190824
Sensitivity: 0.7563451776649747
Specificity: 0.7686357243319268
Threshold: 0.12
Accuracy:  0.7650273224043715

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.54it/s]
Loss: 0.5108303516469104
AUROC: 0.8388099586477981
AUPRC: 0.6904628821481089
Sensitivity: 0.7547892720306514
Specificity: 0.7586916742909423
Threshold: 0.11
Accuracy:  0.7576625126305153

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0080.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.51it/s]
Loss: 0.47184606827795506
AUROC: 0.8514341945878282
AUPRC: 0.7176993759572797
Sensitivity: 0.7563451776649747
Specificity: 0.7721518987341772
Threshold: 0.18
Accuracy:  0.767511177347243

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.56it/s]
Loss: 0.4516367969360757
AUROC: 0.8383523560472482
AUPRC: 0.6875843451771518
Sensitivity: 0.7496807151979565
Specificity: 0.7602927721866423
Threshold: 0.17
Accuracy:  0.757494105759515

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0081.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.52it/s]
Loss: 0.4687056466937065
AUROC: 0.8498290104021644
AUPRC: 0.7136608511986033
Sensitivity: 0.7563451776649747
Specificity: 0.7658227848101266
Threshold: 0.2
Accuracy:  0.7630402384500745

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.57it/s]
Loss: 0.44800116026655157
AUROC: 0.8398458085179228
AUPRC: 0.6874143977714138
Sensitivity: 0.764367816091954
Specificity: 0.7467978042086002
Threshold: 0.19
Accuracy:  0.7514314584035029

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0082.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.53it/s]
Loss: 0.5102996975183487
AUROC: 0.8500788908165378
AUPRC: 0.7160384656167222
Sensitivity: 0.766497461928934
Specificity: 0.7609001406469761
Threshold: 0.17
Accuracy:  0.7625434674615003

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.56it/s]
Loss: 0.48154214531817335
AUROC: 0.8392700442500108
AUPRC: 0.6858661325065765
Sensitivity: 0.7490421455938697
Specificity: 0.7680695333943275
Threshold: 0.17
Accuracy:  0.7630515325025261

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0083.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.52it/s]
Loss: 0.4745838325470686
AUROC: 0.8490472416771974
AUPRC: 0.7135881136295479
Sensitivity: 0.7648054145516074
Specificity: 0.7644163150492265
Threshold: 0.2
Accuracy:  0.7645305514157973

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.57it/s]
Loss: 0.45251430729602243
AUROC: 0.838550119826739
AUPRC: 0.6903051455606297
Sensitivity: 0.743933588761175
Specificity: 0.7737877401646843
Threshold: 0.2
Accuracy:  0.7659144493095318


Plot AUROC/AUPRC for Each Intermediate Model
  Epoch with best Validation Loss:      63, 0.4436
  Epoch with best model Test AUROC:     81, 0.8398
  Epoch with best model Test Accuracy:   2, 0.7664

AUROC/AUPRC Plots - Best Model Based on Validation Loss
  Epoch with best Validation Loss:   63, 0.4436
  Best Model Based on Validation Loss:
    ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0063.model

Generate Stats Based on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.53it/s]
Loss: 0.42817513295944704
AUROC: 0.8332343053846666
AUPRC: 0.6830315402444084
Sensitivity: 0.7567049808429118
Specificity: 0.760064043915828
Threshold: 0.26
Accuracy:  0.7591781744695184
best_model_val_test_auroc: 0.8332343053846666
best_model_val_test_auprc: 0.6830315402444084
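Each evaluation above reports a `Threshold` alongside sensitivity and specificity. One common way to derive such an operating point — an assumption here, not necessarily this project's exact method — is to sweep candidate cutoffs and keep the one maximizing Youden's J (sensitivity + specificity − 1). A minimal sketch:

```python
import numpy as np

def youden_threshold(y_true, y_score):
    """Sweep cutoffs on a 0.01 grid and return the one maximizing
    Youden's J = sensitivity + specificity - 1."""
    best_t, best_j = 0.0, -1.0
    for t in np.arange(0.01, 1.0, 0.01):
        pred = y_score >= t
        tp = np.sum(pred & (y_true == 1))
        fn = np.sum(~pred & (y_true == 1))
        tn = np.sum(~pred & (y_true == 0))
        fp = np.sum(pred & (y_true == 0))
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t

# Toy example with well-separated scores; the chosen cutoff
# falls between the two score clusters.
y = np.array([0, 0, 0, 1, 1, 1])
s = np.array([0.1, 0.2, 0.3, 0.7, 0.8, 0.9])
t = youden_threshold(y, s)
```

Low cutoffs such as 0.07 to 0.29 in the logs are plausible under this scheme when the positive class is rare, since the model's raw probabilities skew low.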

AUROC/AUPRC Plots - Best Model Based on Model AUROC
  Epoch with best model Test AUROC:  81, 0.8398
  Best Model Based on Model AUROC:
    ./vitaldb_cache/models/ABP_EEG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_418703c7_0081.model

Generate Stats Based on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.56it/s]
Loss: 0.44800116026655157
AUROC: 0.8398458085179228
AUPRC: 0.6874143977714138
Sensitivity: 0.764367816091954
Specificity: 0.7467978042086002
Threshold: 0.19
Accuracy:  0.7514314584035029
best_model_auroc_test_auroc: 0.8398458085179228
best_model_auroc_test_auprc: 0.6874143977714138
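The checkpoint-selection summary above (validation-loss minimum at epoch 63, test-AUROC maximum at epoch 81) amounts to two argmin/argmax passes over the per-epoch history. A minimal sketch using the two epochs' logged values; the tuple layout is illustrative, not the project's actual bookkeeping:

```python
# (epoch, val_loss, test_auroc) rows taken from the summary above;
# the tuple layout itself is illustrative.
history = [
    (63, 0.4436, 0.8332),
    (81, 0.4687, 0.8398),
]

# Deployment choice: the checkpoint with the lowest validation loss.
best_by_val_loss = min(history, key=lambda row: row[1])

# Reported for comparison only -- selecting on test AUROC would leak
# test-set information into model selection.
best_by_test_auroc = max(history, key=lambda row: row[2])

print(best_by_val_loss[0])    # epoch 63
print(best_by_test_auroc[0])  # epoch 81
```

Note the two criteria disagree here: the validation-loss model (epoch 63) gives a lower test AUROC (0.8332) than the test-AUROC-selected model (0.8398), which is expected since only the former selection rule is legitimate for an unbiased test estimate.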

Total Processing Time: 6530.2550 sec
Experiment Setup
  name:              ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905
  prediction_window: 003
  max_cases:         _ALL
  use_abp:           True
  use_eeg:           True
  use_ecg:           True
  n_residuals:       12
  skip_connection:   False
  batch_size:        128
  learning_rate:     0.0001
  weight_decay:      0.1
  max_epochs:        200
  patience:          20
  device:            mps
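The experiment name printed above appears to be assembled mechanically from these setup fields (with a short hash suffix such as `d3749905` appended). This sketch reconstructs that pattern by inference from the printed names; it is not the repository's actual helper, and the function and parameter names are illustrative:

```python
def experiment_name(use_abp, use_eeg, use_ecg, n_residuals, batch_size,
                    learning_rate, weight_decay, prediction_window, max_cases):
    """Reconstruct the run-name pattern seen in the log (hash suffix omitted)."""
    # Signal prefix reflects which waveforms are enabled, e.g. "ABP_EEG_ECG"
    signals = "_".join(
        s for s, used in [("ABP", use_abp), ("EEG", use_eeg), ("ECG", use_ecg)] if used
    )
    # ".0e" renders 0.0001 as "1e-04" and 0.1 as "1e-01", matching the log
    return (f"{signals}_{n_residuals}_RESIDUAL_BLOCKS_{batch_size}_BATCH_SIZE_"
            f"{learning_rate:.0e}_LEARNING_RATE_{weight_decay:.0e}_WEIGHT_DECAY_"
            f"{prediction_window}_MINS_{max_cases}_MAX_CASES")

name = experiment_name(True, True, True, 12, 128, 0.0001, 0.1, "003", "_ALL")
```

With the setup values above this reproduces the printed name up to the trailing hash.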

Model Architecture
HypotensionCNN(
  (abpResiduals): Sequential(
    (0): ResidualBlock(
      (bn1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (1): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (2): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (3): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (4): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (5): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (6): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (7): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (8): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=1, dilation=1, ceil_mode=False)
    )
    (9): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (10): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (11): ResidualBlock(
      (bn1): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
  )
  (abpFc): Linear(in_features=2814, out_features=32, bias=True)
  (ecgResiduals): Sequential(
    (0): ResidualBlock(
      (bn1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(1, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (1): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (2): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (3): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (4): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (5): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
      (residualConv): Conv1d(2, 4, kernel_size=(15,), stride=(1,), padding=(7,), bias=False)
    )
    (6): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (7): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (8): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=1, dilation=1, ceil_mode=False)
    )
    (9): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (10): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(4, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (11): ResidualBlock(
      (bn1): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(6, 6, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
  )
  (ecgFc): Linear(in_features=2814, out_features=32, bias=True)
  (eegResiduals): Sequential(
    (0): ResidualBlock(
      (bn1): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(1, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(1, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (1): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (2): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (3): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (4): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(2, 2, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (5): ResidualBlock(
      (bn1): BatchNorm1d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(2, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
      (residualConv): Conv1d(2, 4, kernel_size=(7,), stride=(1,), padding=(3,), bias=False)
    )
    (6): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (7): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
    )
    (8): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (9): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(4, 4, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
    )
    (10): ResidualBlock(
      (bn1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(4, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(4, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (downsample): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (11): ResidualBlock(
      (bn1): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (dropout): Dropout(p=0.5, inplace=False)
      (conv1): Conv1d(6, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (bn2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv1d(6, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (residualConv): Conv1d(6, 6, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
    )
  )
  (eegFc): Linear(in_features=720, out_features=32, bias=True)
  (fullLinear1): Linear(in_features=96, out_features=16, bias=True)
  (fullLinear2): Linear(in_features=16, out_features=1, bias=True)
  (sigmoid): Sigmoid()
)
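Each per-signal branch in the printout is a stack of `ResidualBlock` modules with the submodules `bn1`, `relu`, `dropout`, `conv1`, `bn2`, `conv2`, a projection `residualConv`, and an optional `MaxPool1d` downsample. The sketch below is a plausible reconstruction of one such block; the exact forward-pass ordering is an assumption inferred from the submodule names (pre-activation style), not taken from the project source:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """1-D residual block matching the printed module layout.

    ASSUMPTION: the forward ordering (bn -> relu -> dropout -> conv, twice,
    plus a projection shortcut) is inferred, not copied from the repo.
    """
    def __init__(self, in_ch, out_ch, kernel_size, downsample=False, p_drop=0.5):
        super().__init__()
        pad = kernel_size // 2  # "same" padding, e.g. kernel 15 -> pad 7
        self.bn1 = nn.BatchNorm1d(in_ch)
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(p=p_drop)
        self.conv1 = nn.Conv1d(in_ch, out_ch, kernel_size, padding=pad, bias=False)
        self.bn2 = nn.BatchNorm1d(out_ch)
        self.conv2 = nn.Conv1d(out_ch, out_ch, kernel_size, padding=pad, bias=False)
        # Projection shortcut so the skip path matches the output channel count
        self.residualConv = nn.Conv1d(in_ch, out_ch, kernel_size, padding=pad, bias=False)
        self.downsample = nn.MaxPool1d(kernel_size=2, stride=2) if downsample else None

    def forward(self, x):
        out = self.conv1(self.dropout(self.relu(self.bn1(x))))
        out = self.conv2(self.dropout(self.relu(self.bn2(out))))
        out = out + self.residualConv(x)
        if self.downsample is not None:
            out = self.downsample(out)  # halves the sequence length
        return out

# Shape check: batch 4, 1 input channel, 100 samples -> 2 channels, 50 samples
block = ResidualBlock(in_ch=1, out_ch=2, kernel_size=15, downsample=True)
out = block(torch.randn(4, 1, 100))
```

The three branch outputs are then flattened into the `abpFc`/`ecgFc`/`eegFc` linear layers (32 features each), concatenated (96 features), and passed through the two final linear layers and a sigmoid.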

Training Loop
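The checkpointing behavior visible in the log below (save whenever validation loss improves, stop after `patience` epochs without improvement, here patience=20) follows a standard early-stopping loop. This is a minimal sketch; `train_epoch`, `validate`, and `save_model` are hypothetical stand-ins for the project's functions:

```python
def fit(train_epoch, validate, save_model, max_epochs=200, patience=20):
    """Early-stopping loop: checkpoint on each new best validation loss,
    halt after `patience` consecutive epochs without improvement."""
    best_val = float("inf")
    epochs_since_best = 0
    for epoch in range(max_epochs):
        train_epoch()              # one pass over the training loader
        val_loss = validate()      # one pass over the validation loader
        if val_loss < best_val:
            best_val = val_loss
            epochs_since_best = 0
            save_model(epoch)      # keep the intermediate checkpoint
        else:
            epochs_since_best += 1
            if epochs_since_best >= patience:
                break              # early stop
    return best_val
```

Note that intermediate checkpoints are retained, which is what later allows selecting the best model by validation loss (epoch 63) or by test AUROC (epoch 81) separately.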
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:57<00:00,  1.59it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:07<00:00,  2.27it/s]
[2024-05-05 18:44:36.938433] Completed epoch 0 with training loss 0.52795988, validation loss 0.55883896
Validation loss improved to 0.55883896. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:57<00:00,  1.60it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.40it/s]
[2024-05-05 18:45:41.284010] Completed epoch 1 with training loss 0.44101849, validation loss 0.58662617
No improvement in validation loss. 1 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:57<00:00,  1.61it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.53it/s]
[2024-05-05 18:46:44.824980] Completed epoch 2 with training loss 0.43648526, validation loss 0.61626756
No improvement in validation loss. 2 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:56<00:00,  1.64it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.51it/s]
[2024-05-05 18:47:47.567301] Completed epoch 3 with training loss 0.43333560, validation loss 0.61614990
No improvement in validation loss. 3 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:56<00:00,  1.64it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.54it/s]
[2024-05-05 18:48:50.091047] Completed epoch 4 with training loss 0.43109053, validation loss 0.59263766
No improvement in validation loss. 4 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:56<00:00,  1.64it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.53it/s]
[2024-05-05 18:49:52.750324] Completed epoch 5 with training loss 0.43332738, validation loss 0.57566094
No improvement in validation loss. 5 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:56<00:00,  1.64it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.52it/s]
[2024-05-05 18:50:55.408254] Completed epoch 6 with training loss 0.43011338, validation loss 0.65960628
No improvement in validation loss. 6 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:56<00:00,  1.63it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.51it/s]
[2024-05-05 18:51:58.190482] Completed epoch 7 with training loss 0.43400714, validation loss 0.56841421
No improvement in validation loss. 7 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:56<00:00,  1.64it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.51it/s]
[2024-05-05 18:53:00.685662] Completed epoch 8 with training loss 0.43240178, validation loss 0.62533641
No improvement in validation loss. 8 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:56<00:00,  1.63it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.53it/s]
[2024-05-05 18:54:03.408473] Completed epoch 9 with training loss 0.43124446, validation loss 0.59478712
No improvement in validation loss. 9 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:56<00:00,  1.64it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.53it/s]
[2024-05-05 18:55:05.982956] Completed epoch 10 with training loss 0.43058681, validation loss 0.61887527
No improvement in validation loss. 10 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:57<00:00,  1.60it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.52it/s]
[2024-05-05 18:56:10.008998] Completed epoch 11 with training loss 0.43051803, validation loss 0.55342233
Validation loss improved to 0.55342233. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:56<00:00,  1.64it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.52it/s]
[2024-05-05 18:57:12.669245] Completed epoch 12 with training loss 0.43200588, validation loss 0.53479171
Validation loss improved to 0.53479171. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:57<00:00,  1.61it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.41it/s]
[2024-05-05 18:58:16.528026] Completed epoch 13 with training loss 0.43015614, validation loss 0.54621005
No improvement in validation loss. 1 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:56<00:00,  1.62it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.53it/s]
[2024-05-05 18:59:19.770229] Completed epoch 14 with training loss 0.42898878, validation loss 0.58597779
No improvement in validation loss. 2 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:56<00:00,  1.64it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.52it/s]
[2024-05-05 19:00:22.448186] Completed epoch 15 with training loss 0.42603734, validation loss 0.56519854
No improvement in validation loss. 3 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:56<00:00,  1.64it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.53it/s]
[2024-05-05 19:01:24.976329] Completed epoch 16 with training loss 0.42759663, validation loss 0.51889825
Validation loss improved to 0.51889825. Model saved.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:56<00:00,  1.63it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.53it/s]
[2024-05-05 19:02:27.818996] Completed epoch 17 with training loss 0.42720827, validation loss 0.55076563
No improvement in validation loss. 1 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:56<00:00,  1.62it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:07<00:00,  2.17it/s]
[2024-05-05 19:03:31.944865] Completed epoch 18 with training loss 0.42678925, validation loss 0.55994153
No improvement in validation loss. 2 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:58<00:00,  1.58it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.51it/s]
[2024-05-05 19:04:36.829636] Completed epoch 19 with training loss 0.42527589, validation loss 0.52013195
No improvement in validation loss. 3 epochs without improvement.
[2024-05-05 19:05:39.630230] Completed epoch 20 with training loss 0.42659536, validation loss 0.51356888
Validation loss improved to 0.51356888. Model saved.
[2024-05-05 19:06:42.129587] Completed epoch 21 with training loss 0.42402133, validation loss 0.51686382
No improvement in validation loss. 1 epochs without improvement.
[2024-05-05 19:07:44.470911] Completed epoch 22 with training loss 0.42523128, validation loss 0.50877386
Validation loss improved to 0.50877386. Model saved.
[2024-05-05 19:08:46.780018] Completed epoch 23 with training loss 0.42480895, validation loss 0.49736953
Validation loss improved to 0.49736953. Model saved.
[2024-05-05 19:09:49.381774] Completed epoch 24 with training loss 0.42440552, validation loss 0.52411389
No improvement in validation loss. 1 epochs without improvement.
[2024-05-05 19:10:51.871369] Completed epoch 25 with training loss 0.42503220, validation loss 0.50890601
No improvement in validation loss. 2 epochs without improvement.
[2024-05-05 19:11:54.828255] Completed epoch 26 with training loss 0.42392814, validation loss 0.50457764
No improvement in validation loss. 3 epochs without improvement.
[2024-05-05 19:12:57.462194] Completed epoch 27 with training loss 0.42464566, validation loss 0.52180785
No improvement in validation loss. 4 epochs without improvement.
[2024-05-05 19:14:00.246530] Completed epoch 28 with training loss 0.42487147, validation loss 0.51401305
No improvement in validation loss. 5 epochs without improvement.
[2024-05-05 19:15:03.553281] Completed epoch 29 with training loss 0.42412812, validation loss 0.51007569
No improvement in validation loss. 6 epochs without improvement.
[2024-05-05 19:16:06.478817] Completed epoch 30 with training loss 0.42311168, validation loss 0.53608447
No improvement in validation loss. 7 epochs without improvement.
[2024-05-05 19:17:09.156641] Completed epoch 31 with training loss 0.42329684, validation loss 0.48148870
Validation loss improved to 0.48148870. Model saved.
[2024-05-05 19:18:11.821567] Completed epoch 32 with training loss 0.42420840, validation loss 0.50467837
No improvement in validation loss. 1 epochs without improvement.
[2024-05-05 19:19:14.247947] Completed epoch 33 with training loss 0.42186087, validation loss 0.53937936
No improvement in validation loss. 2 epochs without improvement.
[2024-05-05 19:20:16.773620] Completed epoch 34 with training loss 0.42259124, validation loss 0.49736646
No improvement in validation loss. 3 epochs without improvement.
[2024-05-05 19:21:19.099258] Completed epoch 35 with training loss 0.42325455, validation loss 0.54012871
No improvement in validation loss. 4 epochs without improvement.
[2024-05-05 19:22:21.490039] Completed epoch 36 with training loss 0.42164868, validation loss 0.49783373
No improvement in validation loss. 5 epochs without improvement.
[2024-05-05 19:23:24.384014] Completed epoch 37 with training loss 0.42173618, validation loss 0.50298530
No improvement in validation loss. 6 epochs without improvement.
[2024-05-05 19:24:27.533294] Completed epoch 38 with training loss 0.42231971, validation loss 0.57625699
No improvement in validation loss. 7 epochs without improvement.
[2024-05-05 19:25:30.057507] Completed epoch 39 with training loss 0.42192620, validation loss 0.48255572
No improvement in validation loss. 8 epochs without improvement.
[2024-05-05 19:26:32.549272] Completed epoch 40 with training loss 0.42344323, validation loss 0.49366909
No improvement in validation loss. 9 epochs without improvement.
[2024-05-05 19:27:34.976705] Completed epoch 41 with training loss 0.42205566, validation loss 0.53150809
No improvement in validation loss. 10 epochs without improvement.
[2024-05-05 19:28:37.432443] Completed epoch 42 with training loss 0.42132941, validation loss 0.55494201
No improvement in validation loss. 11 epochs without improvement.
[2024-05-05 19:29:40.333653] Completed epoch 43 with training loss 0.42170078, validation loss 0.55957139
No improvement in validation loss. 12 epochs without improvement.
[2024-05-05 19:30:44.017312] Completed epoch 44 with training loss 0.42294267, validation loss 0.48438299
No improvement in validation loss. 13 epochs without improvement.
[2024-05-05 19:31:47.792074] Completed epoch 45 with training loss 0.42158300, validation loss 0.48267502
No improvement in validation loss. 14 epochs without improvement.
[2024-05-05 19:32:50.143443] Completed epoch 46 with training loss 0.42053935, validation loss 0.51295334
No improvement in validation loss. 15 epochs without improvement.
[2024-05-05 19:33:52.343047] Completed epoch 47 with training loss 0.42210254, validation loss 0.47060552
Validation loss improved to 0.47060552. Model saved.
[2024-05-05 19:34:54.437554] Completed epoch 48 with training loss 0.42253798, validation loss 0.52060413
No improvement in validation loss. 1 epochs without improvement.
[2024-05-05 19:35:56.767524] Completed epoch 49 with training loss 0.42142218, validation loss 0.50695360
No improvement in validation loss. 2 epochs without improvement.
[2024-05-05 19:36:59.167083] Completed epoch 50 with training loss 0.42049628, validation loss 0.58063203
No improvement in validation loss. 3 epochs without improvement.
[2024-05-05 19:38:02.398635] Completed epoch 51 with training loss 0.42058975, validation loss 0.45631829
Validation loss improved to 0.45631829. Model saved.
[2024-05-05 19:39:04.898560] Completed epoch 52 with training loss 0.42128044, validation loss 0.48241997
No improvement in validation loss. 1 epochs without improvement.
[2024-05-05 19:40:07.582148] Completed epoch 53 with training loss 0.42017937, validation loss 0.50073361
No improvement in validation loss. 2 epochs without improvement.
[2024-05-05 19:41:09.917517] Completed epoch 54 with training loss 0.42052335, validation loss 0.47504681
No improvement in validation loss. 3 epochs without improvement.
[2024-05-05 19:42:12.224154] Completed epoch 55 with training loss 0.42009372, validation loss 0.46775323
No improvement in validation loss. 4 epochs without improvement.
[2024-05-05 19:43:14.433605] Completed epoch 56 with training loss 0.41947418, validation loss 0.45906937
No improvement in validation loss. 5 epochs without improvement.
[2024-05-05 19:44:16.701533] Completed epoch 57 with training loss 0.42154366, validation loss 0.54398131
No improvement in validation loss. 6 epochs without improvement.
[2024-05-05 19:45:18.948182] Completed epoch 58 with training loss 0.42046031, validation loss 0.48395336
No improvement in validation loss. 7 epochs without improvement.
[2024-05-05 19:46:21.482367] Completed epoch 59 with training loss 0.42063859, validation loss 0.48307282
No improvement in validation loss. 8 epochs without improvement.
[2024-05-05 19:47:23.873672] Completed epoch 60 with training loss 0.41966885, validation loss 0.47429207
No improvement in validation loss. 9 epochs without improvement.
[2024-05-05 19:48:26.298665] Completed epoch 61 with training loss 0.42009506, validation loss 0.57645512
No improvement in validation loss. 10 epochs without improvement.
[2024-05-05 19:49:28.684725] Completed epoch 62 with training loss 0.42023227, validation loss 0.47090453
No improvement in validation loss. 11 epochs without improvement.
[2024-05-05 19:50:31.107170] Completed epoch 63 with training loss 0.41993958, validation loss 0.48285058
No improvement in validation loss. 12 epochs without improvement.
[2024-05-05 19:51:33.632713] Completed epoch 64 with training loss 0.42004743, validation loss 0.49716035
No improvement in validation loss. 13 epochs without improvement.
[2024-05-05 19:52:35.968277] Completed epoch 65 with training loss 0.42098689, validation loss 0.52881181
No improvement in validation loss. 14 epochs without improvement.
[2024-05-05 19:53:38.482043] Completed epoch 66 with training loss 0.41828343, validation loss 0.45519739
Validation loss improved to 0.45519739. Model saved.
[2024-05-05 19:54:40.908845] Completed epoch 67 with training loss 0.41982007, validation loss 0.48882213
No improvement in validation loss. 1 epochs without improvement.
[2024-05-05 19:55:43.380365] Completed epoch 68 with training loss 0.41797626, validation loss 0.45868844
No improvement in validation loss. 2 epochs without improvement.
[2024-05-05 19:56:45.658889] Completed epoch 69 with training loss 0.41833222, validation loss 0.44134000
Validation loss improved to 0.44134000. Model saved.
[2024-05-05 19:57:48.343141] Completed epoch 70 with training loss 0.41860849, validation loss 0.44330549
No improvement in validation loss. 1 epochs without improvement.
[2024-05-05 19:58:50.690290] Completed epoch 71 with training loss 0.41976947, validation loss 0.44839773
No improvement in validation loss. 2 epochs without improvement.
[2024-05-05 19:59:53.257378] Completed epoch 72 with training loss 0.41977385, validation loss 0.43812793
Validation loss improved to 0.43812793. Model saved.
[2024-05-05 20:00:55.743028] Completed epoch 73 with training loss 0.41795000, validation loss 0.44566932
No improvement in validation loss. 1 epochs without improvement.
[2024-05-05 20:01:58.284632] Completed epoch 74 with training loss 0.41824079, validation loss 0.53496170
No improvement in validation loss. 2 epochs without improvement.
[2024-05-05 20:03:00.942988] Completed epoch 75 with training loss 0.41885728, validation loss 0.51224285
No improvement in validation loss. 3 epochs without improvement.
[2024-05-05 20:04:05.146604] Completed epoch 76 with training loss 0.42082226, validation loss 0.47327057
No improvement in validation loss. 4 epochs without improvement.
[2024-05-05 20:05:09.369277] Completed epoch 77 with training loss 0.41738975, validation loss 0.45565331
No improvement in validation loss. 5 epochs without improvement.
[2024-05-05 20:06:13.550641] Completed epoch 78 with training loss 0.42033461, validation loss 0.49887037
No improvement in validation loss. 6 epochs without improvement.
[2024-05-05 20:07:17.790894] Completed epoch 79 with training loss 0.41854209, validation loss 0.48892725
No improvement in validation loss. 7 epochs without improvement.
[2024-05-05 20:08:22.141654] Completed epoch 80 with training loss 0.41978526, validation loss 0.51650363
No improvement in validation loss. 8 epochs without improvement.
[2024-05-05 20:09:26.375023] Completed epoch 81 with training loss 0.41913119, validation loss 0.44872034
No improvement in validation loss. 9 epochs without improvement.
[2024-05-05 20:10:30.731075] Completed epoch 82 with training loss 0.41983211, validation loss 0.44344380
No improvement in validation loss. 10 epochs without improvement.
[2024-05-05 20:11:35.772960] Completed epoch 83 with training loss 0.41888970, validation loss 0.48271114
No improvement in validation loss. 11 epochs without improvement.
[2024-05-05 20:12:39.978045] Completed epoch 84 with training loss 0.41925165, validation loss 0.45615065
No improvement in validation loss. 12 epochs without improvement.
[2024-05-05 20:13:44.380370] Completed epoch 85 with training loss 0.41885811, validation loss 0.51531744
No improvement in validation loss. 13 epochs without improvement.
[2024-05-05 20:14:48.620360] Completed epoch 86 with training loss 0.41910076, validation loss 0.53633815
No improvement in validation loss. 14 epochs without improvement.
[2024-05-05 20:15:52.979971] Completed epoch 87 with training loss 0.41956797, validation loss 0.45382959
No improvement in validation loss. 15 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:57<00:00,  1.60it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.42it/s]
[2024-05-05 20:16:57.303718] Completed epoch 88 with training loss 0.42118716, validation loss 0.44256198
No improvement in validation loss. 16 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:57<00:00,  1.60it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.44it/s]
[2024-05-05 20:18:01.596035] Completed epoch 89 with training loss 0.41840437, validation loss 0.54899830
No improvement in validation loss. 17 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:57<00:00,  1.60it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.43it/s]
[2024-05-05 20:19:05.936501] Completed epoch 90 with training loss 0.41724101, validation loss 0.49676931
No improvement in validation loss. 18 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:57<00:00,  1.60it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.43it/s]
[2024-05-05 20:20:10.310001] Completed epoch 91 with training loss 0.41882050, validation loss 0.48618519
No improvement in validation loss. 19 epochs without improvement.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 92/92 [00:57<00:00,  1.60it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.42it/s]
[2024-05-05 20:21:14.533329] Completed epoch 92 with training loss 0.41939694, validation loss 0.55099678
No improvement in validation loss. 20 epochs without improvement.
Early stopping due to no improvement in validation loss.
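The messages above reflect a patience-based early-stopping rule: training halts once the validation loss has failed to improve for a fixed number of consecutive epochs (20 here, judging by the final count). A minimal sketch of that control flow follows; the function and parameter names are illustrative, not the project's actual API.

```python
def train_with_early_stopping(train_one_epoch, validate, max_epochs=100, patience=20):
    """Run training epochs until validation loss stops improving.

    `train_one_epoch` and `validate` are callables returning the epoch's
    training and validation loss; names here are hypothetical stand-ins
    for the notebook's own training loop.
    """
    best_val_loss = float("inf")
    best_epoch = -1
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        train_one_epoch()
        val_loss = validate()
        if val_loss < best_val_loss:
            # New best: remember it and reset the patience counter.
            best_val_loss = val_loss
            best_epoch = epoch
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            # Matches the "Early stopping due to no improvement" message above.
            break
    return best_epoch, best_val_loss
```

Note that the best-loss epoch (72 in this run) can precede the stopping epoch by up to `patience` epochs, which is why evaluation is done on saved intermediate checkpoints rather than the final weights.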

Plot Training and Validation Loss Values
  Epoch with best Validation Loss:   72, 0.4381
Generate AUROC/AUPRC for Each Intermediate Model

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0000.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:05<00:00,  2.70it/s]
Loss: 0.5592644829303026
AUROC: 0.83756583159012
AUPRC: 0.6911777009854443
Sensitivity: 0.7563451776649747
Specificity: 0.7749648382559775
Threshold: 0.19
Accuracy:  0.76949826130154

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.40it/s]
Loss: 0.5227776962391874
AUROC: 0.8262150787724973
AUPRC: 0.6702724685120358
Sensitivity: 0.768837803320562
Specificity: 0.7355901189387009
Threshold: 0.18
Accuracy:  0.7443583698214887
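Each checkpoint block above reports AUROC, AUPRC, and a sensitivity/specificity pair at a single operating threshold. A plausible reconstruction of that evaluation using scikit-learn is sketched below; the threshold rule shown (maximizing Youden's J, i.e. sensitivity + specificity − 1) is an assumption — the notebook may choose its thresholds differently.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score, roc_curve

def evaluate_predictions(y_true, y_score):
    """Summarize probabilistic predictions as in the log above.

    Threshold selection via Youden's J is an assumed criterion, not
    necessarily the project's actual rule.
    """
    auroc = roc_auc_score(y_true, y_score)
    auprc = average_precision_score(y_true, y_score)
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    j = tpr - fpr  # Youden's J = sensitivity + specificity - 1
    threshold = thresholds[int(np.argmax(j))]
    y_pred = (y_score >= threshold).astype(int)
    tp = int(((y_pred == 1) & (y_true == 1)).sum())
    tn = int(((y_pred == 0) & (y_true == 0)).sum())
    fp = int(((y_pred == 1) & (y_true == 0)).sum())
    fn = int(((y_pred == 0) & (y_true == 1)).sum())
    return {
        "auroc": auroc,
        "auprc": auprc,
        "threshold": float(threshold),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / len(y_true),
    }
```

Because the threshold is tuned per checkpoint, the reported sensitivity/specificity trade-offs vary from model to model even when AUROC is nearly identical.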

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0001.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.38it/s]
Loss: 0.5873588155955076
AUROC: 0.8402324125835018
AUPRC: 0.7020478927693069
Sensitivity: 0.7918781725888325
Specificity: 0.7433192686357243
Threshold: 0.14
Accuracy:  0.7575757575757576

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.46it/s]
Loss: 0.5460975563272517
AUROC: 0.8301681634784925
AUPRC: 0.6799926786338009
Sensitivity: 0.7567049808429118
Specificity: 0.7541171088746569
Threshold: 0.14
Accuracy:  0.7547995958235096

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0002.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.38it/s]
Loss: 0.6108043380081654
AUROC: 0.8405596369356569
AUPRC: 0.7057239962422482
Sensitivity: 0.7749576988155669
Specificity: 0.7552742616033755
Threshold: 0.12
Accuracy:  0.7610531544957775

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.43it/s]
Loss: 0.5688545462932992
AUROC: 0.8311756048884167
AUPRC: 0.682544508316036
Sensitivity: 0.7464878671775224
Specificity: 0.7657822506861848
Threshold: 0.12
Accuracy:  0.7606938363085214

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0003.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.39it/s]
Loss: 0.616171034052968
AUROC: 0.8403752013917148
AUPRC: 0.7065596565948282
Sensitivity: 0.751269035532995
Specificity: 0.7869198312236287
Threshold: 0.12
Accuracy:  0.7764530551415797

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.46it/s]
Loss: 0.5712531082173611
AUROC: 0.8312185462112899
AUPRC: 0.6831163547303335
Sensitivity: 0.7726692209450831
Specificity: 0.7367337602927722
Threshold: 0.11
Accuracy:  0.7462108454024924

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0004.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.40it/s]
Loss: 0.5940951388329268
AUROC: 0.8407524018267449
AUPRC: 0.7064143577674881
Sensitivity: 0.7580372250423012
Specificity: 0.7693389592123769
Threshold: 0.13
Accuracy:  0.7660208643815202

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.44it/s]
Loss: 0.5517927731605287
AUROC: 0.8317726207293832
AUPRC: 0.683451475658985
Sensitivity: 0.7420178799489144
Specificity: 0.7742451967063129
Threshold: 0.13
Accuracy:  0.7657460424385315

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0005.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.37it/s]
Loss: 0.5742961578071117
AUROC: 0.8404930021584908
AUPRC: 0.704582092854102
Sensitivity: 0.7783417935702199
Specificity: 0.750351617440225
Threshold: 0.14
Accuracy:  0.7585692995529061

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.45it/s]
Loss: 0.534679466105522
AUROC: 0.8317228876666678
AUPRC: 0.6834381321264452
Sensitivity: 0.7554278416347382
Specificity: 0.760064043915828
Threshold: 0.14
Accuracy:  0.7588413607275177

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0006.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.42it/s]
Loss: 0.6564258597791195
AUROC: 0.8394101870295406
AUPRC: 0.7056579490710868
Sensitivity: 0.7631133671742809
Specificity: 0.7651195499296765
Threshold: 0.09
Accuracy:  0.7645305514157973

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.45it/s]
Loss: 0.6070538549981219
AUROC: 0.8308209738274098
AUPRC: 0.6829960226585959
Sensitivity: 0.7452107279693486
Specificity: 0.7698993595608418
Threshold: 0.09
Accuracy:  0.7633883462445268

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0007.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.39it/s]
Loss: 0.5692170225083828
AUROC: 0.8402431217441177
AUPRC: 0.7038471928119712
Sensitivity: 0.7478849407783418
Specificity: 0.7834036568213784
Threshold: 0.15
Accuracy:  0.7729756582215599

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.46it/s]
Loss: 0.5291623019791664
AUROC: 0.831497810868887
AUPRC: 0.6833750232529393
Sensitivity: 0.7630906768837803
Specificity: 0.7477127172918573
Threshold: 0.14
Accuracy:  0.7517682721455036

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0008.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.42it/s]
Loss: 0.6236552707850933
AUROC: 0.8391763703560915
AUPRC: 0.7044671231131863
Sensitivity: 0.7766497461928934
Specificity: 0.749648382559775
Threshold: 0.1
Accuracy:  0.7575757575757576

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.44it/s]
Loss: 0.5792241727418088
AUROC: 0.8306445346504343
AUPRC: 0.6829046047008314
Sensitivity: 0.7586206896551724
Specificity: 0.7566331198536139
Threshold: 0.1
Accuracy:  0.7571572920175144

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0009.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.41it/s]
Loss: 0.5968421306461096
AUROC: 0.8393911485217789
AUPRC: 0.7041160134988749
Sensitivity: 0.7698815566835872
Specificity: 0.760196905766526
Threshold: 0.12
Accuracy:  0.7630402384500745

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.45it/s]
Loss: 0.5532494792912869
AUROC: 0.8308542752614747
AUPRC: 0.6832738028900983
Sensitivity: 0.7515964240102171
Specificity: 0.7614364135407137
Threshold: 0.12
Accuracy:  0.7588413607275177

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0010.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.42it/s]
Loss: 0.6161208041012287
AUROC: 0.8387938153407537
AUPRC: 0.7041079212307266
Sensitivity: 0.7563451776649747
Specificity: 0.7721518987341772
Threshold: 0.11
Accuracy:  0.767511177347243

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.47it/s]
Loss: 0.5726136793481543
AUROC: 0.8303502989534002
AUPRC: 0.6827824678287184
Sensitivity: 0.7388250319284803
Specificity: 0.7747026532479414
Threshold: 0.11
Accuracy:  0.7652408218255304

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0011.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.40it/s]
Loss: 0.5514347311109304
AUROC: 0.8394958603144685
AUPRC: 0.7033742320670903
Sensitivity: 0.7715736040609137
Specificity: 0.7616033755274262
Threshold: 0.15
Accuracy:  0.7645305514157973

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.45it/s]
Loss: 0.516109178357936
AUROC: 0.8310816159725363
AUPRC: 0.683927249202195
Sensitivity: 0.7515964240102171
Specificity: 0.7641811527904849
Threshold: 0.15
Accuracy:  0.7608622431795218

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0012.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.40it/s]
Loss: 0.5350820776075125
AUROC: 0.8394149466564811
AUPRC: 0.7035644601060528
Sensitivity: 0.7715736040609137
Specificity: 0.7623066104078763
Threshold: 0.16
Accuracy:  0.7650273224043715

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.46it/s]
Loss: 0.5017134166778402
AUROC: 0.8312100017643917
AUPRC: 0.6842993356479792
Sensitivity: 0.7541507024265645
Specificity: 0.7632662397072278
Threshold: 0.16
Accuracy:  0.7608622431795218

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0013.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.43it/s]
Loss: 0.5440880209207535
AUROC: 0.8389949095789871
AUPRC: 0.7029743916109649
Sensitivity: 0.7461928934010152
Specificity: 0.7819971870604782
Threshold: 0.17
Accuracy:  0.771485345255837

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.40it/s]
Loss: 0.5102408242986557
AUROC: 0.8304462596647187
AUPRC: 0.6829956639507963
Sensitivity: 0.7611749680715197
Specificity: 0.7527447392497713
Threshold: 0.16
Accuracy:  0.7549680026945099

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0014.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.40it/s]
Loss: 0.5873254425823689
AUROC: 0.8389092362940593
AUPRC: 0.7042805219386186
Sensitivity: 0.766497461928934
Specificity: 0.7665260196905767
Threshold: 0.12
Accuracy:  0.7665176353700944

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.46it/s]
Loss: 0.5447619322766649
AUROC: 0.830197813439524
AUPRC: 0.6823315596297278
Sensitivity: 0.743933588761175
Specificity: 0.7692131747483989
Threshold: 0.12
Accuracy:  0.762546311889525

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0015.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.41it/s]
Loss: 0.5681462902575731
AUROC: 0.8376991011444522
AUPRC: 0.7044093513993106
Sensitivity: 0.7614213197969543
Specificity: 0.7609001406469761
Threshold: 0.13
Accuracy:  0.7610531544957775

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.47it/s]
Loss: 0.5289706720950755
AUROC: 0.8290219660932977
AUPRC: 0.6803869417244626
Sensitivity: 0.7452107279693486
Specificity: 0.7660109789569991
Threshold: 0.13
Accuracy:  0.7605254294375211

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0016.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.40it/s]
Loss: 0.519069941714406
AUROC: 0.8370196643987046
AUPRC: 0.7053946556986351
Sensitivity: 0.7597292724196277
Specificity: 0.7623066104078763
Threshold: 0.19
Accuracy:  0.7615499254843517

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.46it/s]
Loss: 0.4880260818816246
AUROC: 0.8291724067822752
AUPRC: 0.6816363378035322
Sensitivity: 0.7637292464878672
Specificity: 0.7447392497712717
Threshold: 0.18
Accuracy:  0.7497473896934995

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0017.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.40it/s]
Loss: 0.5532096643000841
AUROC: 0.8368875847511072
AUPRC: 0.7043555252483062
Sensitivity: 0.7631133671742809
Specificity: 0.7538677918424754
Threshold: 0.15
Accuracy:  0.756582215598609

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.45it/s]
Loss: 0.5171721771042398
AUROC: 0.8281808127653161
AUPRC: 0.6785273612252315
Sensitivity: 0.7413793103448276
Specificity: 0.7648673376029277
Threshold: 0.15
Accuracy:  0.7586729538565173

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0018.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.34it/s]
Loss: 0.5613618735224009
AUROC: 0.8370625010411684
AUPRC: 0.704885237067512
Sensitivity: 0.7563451776649747
Specificity: 0.7693389592123769
Threshold: 0.14
Accuracy:  0.7655240933929458

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.44it/s]
Loss: 0.525749238247567
AUROC: 0.8285725427923428
AUPRC: 0.6801555245119225
Sensitivity: 0.7598978288633461
Specificity: 0.7506861848124429
Threshold: 0.13
Accuracy:  0.7531155271135063

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0019.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.40it/s]
Loss: 0.5188658274710178
AUROC: 0.8350943953012964
AUPRC: 0.7026487606042147
Sensitivity: 0.751269035532995
Specificity: 0.7679324894514767
Threshold: 0.18
Accuracy:  0.7630402384500745

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.47it/s]
Loss: 0.4918559006554015
AUROC: 0.8274071386589921
AUPRC: 0.6789096994515864
Sensitivity: 0.7528735632183908
Specificity: 0.7527447392497713
Threshold: 0.17
Accuracy:  0.7527787133715056

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0020.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.40it/s]
Loss: 0.5142453238368034
AUROC: 0.8375795155175738
AUPRC: 0.7049730581118439
Sensitivity: 0.7580372250423012
Specificity: 0.7644163150492265
Threshold: 0.19
Accuracy:  0.7625434674615003

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.45it/s]
Loss: 0.48404780981388495
AUROC: 0.8293459247808239
AUPRC: 0.6808275994180973
Sensitivity: 0.7586206896551724
Specificity: 0.7516010978957
Threshold: 0.18
Accuracy:  0.7534523408555069

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0021.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.43it/s]
Loss: 0.5162491519004107
AUROC: 0.8369042434453988
AUPRC: 0.7040629878105666
Sensitivity: 0.7614213197969543
Specificity: 0.7630098452883263
Threshold: 0.19
Accuracy:  0.7625434674615003

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.48it/s]
Loss: 0.48893056683083796
AUROC: 0.8286354211579785
AUPRC: 0.6798497641508778
Sensitivity: 0.7567049808429118
Specificity: 0.7504574565416285
Threshold: 0.18
Accuracy:  0.7521050858875042

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0022.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.39it/s]
Loss: 0.5080960690975189
AUROC: 0.8388414116101581
AUPRC: 0.7060593897348477
Sensitivity: 0.7648054145516074
Specificity: 0.7609001406469761
Threshold: 0.19
Accuracy:  0.762046696472926

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.44it/s]
Loss: 0.4817239312415427
AUROC: 0.8306439504147489
AUPRC: 0.6810836335476239
Sensitivity: 0.7630906768837803
Specificity: 0.7483989021043
Threshold: 0.18
Accuracy:  0.7522734927585045

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0023.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.40it/s]
Loss: 0.4947072435170412
AUROC: 0.8388580703044495
AUPRC: 0.7057978051542618
Sensitivity: 0.7648054145516074
Specificity: 0.7524613220815752
Threshold: 0.2
Accuracy:  0.7560854446100348

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:18<00:00,  2.48it/s]
Loss: 0.47109461211143655
AUROC: 0.8306481130940071
AUPRC: 0.6819940680024248
Sensitivity: 0.7471264367816092
Specificity: 0.7618938700823422
Threshold: 0.2
Accuracy:  0.757999326372516

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0024.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.41it/s]
Loss: 0.5234097335487604
AUROC: 0.8363711652280695
AUPRC: 0.7045863846537596
Sensitivity: 0.754653130287648
Specificity: 0.7651195499296765
Threshold: 0.17
Accuracy:  0.762046696472926

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.45it/s]
Loss: 0.49360242549409256
AUROC: 0.8285075465723477
AUPRC: 0.6788401543675835
Sensitivity: 0.7573435504469987
Specificity: 0.7511436413540714
Threshold: 0.16
Accuracy:  0.7527787133715056

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0025.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.42it/s]
Loss: 0.5094730295240879
AUROC: 0.8389627820971391
AUPRC: 0.7056993690610514
Sensitivity: 0.766497461928934
Specificity: 0.7566807313642757
Threshold: 0.18
Accuracy:  0.7595628415300546

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.47it/s]
Loss: 0.48271741607087726
AUROC: 0.8306619156620735
AUPRC: 0.6814831171840157
Sensitivity: 0.7618135376756067
Specificity: 0.7454254345837146
Threshold: 0.17
Accuracy:  0.7497473896934995

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0026.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.42it/s]
Loss: 0.5035305581986904
AUROC: 0.839015137993484
AUPRC: 0.7069965072751323
Sensitivity: 0.7715736040609137
Specificity: 0.7475386779184248
Threshold: 0.18
Accuracy:  0.754595131644312

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.46it/s]
Loss: 0.47694036269441564
AUROC: 0.8308842173403488
AUPRC: 0.6823827612693893
Sensitivity: 0.7484035759897829
Specificity: 0.7593778591033852
Threshold: 0.18
Accuracy:  0.756483664533513

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0027.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.39it/s]
Loss: 0.5242832098156214
AUROC: 0.8413925716502341
AUPRC: 0.7086349613874596
Sensitivity: 0.7698815566835872
Specificity: 0.7566807313642757
Threshold: 0.17
Accuracy:  0.7605563835072032

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.45it/s]
Loss: 0.4929015912274097
AUROC: 0.8326789163362814
AUPRC: 0.6820716197591246
Sensitivity: 0.7509578544061303
Specificity: 0.760064043915828
Threshold: 0.17
Accuracy:  0.7576625126305153

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0028.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.40it/s]
Loss: 0.5142343621701002
AUROC: 0.8410665372048138
AUPRC: 0.7080812406099234
Sensitivity: 0.7614213197969543
Specificity: 0.7693389592123769
Threshold: 0.18
Accuracy:  0.7670144063586687

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.46it/s]
Loss: 0.4871461077573452
AUROC: 0.8321474809509954
AUPRC: 0.6820822660128202
Sensitivity: 0.764367816091954
Specificity: 0.75
Threshold: 0.17
Accuracy:  0.7537891545975076

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0029.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.40it/s]
Loss: 0.5058958362787962
AUROC: 0.8394339851642427
AUPRC: 0.7061022447187464
Sensitivity: 0.7631133671742809
Specificity: 0.7686357243319268
Threshold: 0.18
Accuracy:  0.7670144063586687

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.47it/s]
Loss: 0.4813514186980877
AUROC: 0.8314029455994785
AUPRC: 0.6822950856256901
Sensitivity: 0.7567049808429118
Specificity: 0.7543458371454712
Threshold: 0.17
Accuracy:  0.7549680026945099

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0030.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.40it/s]
Loss: 0.5346809048205614
AUROC: 0.8417031373080979
AUPRC: 0.7055967333180252
Sensitivity: 0.7580372250423012
Specificity: 0.7714486638537271
Threshold: 0.16
Accuracy:  0.767511177347243

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.46it/s]
Loss: 0.5056975176359745
AUROC: 0.8326196164142183
AUPRC: 0.6811508090171406
Sensitivity: 0.7541507024265645
Specificity: 0.757548032936871
Threshold: 0.15
Accuracy:  0.7566520714045133

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0031.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.43it/s]
Loss: 0.48096016235649586
AUROC: 0.8446814738660783
AUPRC: 0.7050476774747803
Sensitivity: 0.7749576988155669
Specificity: 0.7531645569620253
Threshold: 0.22
Accuracy:  0.7595628415300546

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.45it/s]
Loss: 0.4584820007390164
AUROC: 0.8352377225791903
AUPRC: 0.6860546808056339
Sensitivity: 0.7547892720306514
Specificity: 0.7612076852698993
Threshold: 0.22
Accuracy:  0.7595149882115191

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0032.model
AUROC/AUPRC on Validation Data
Loss: 0.5011829994618893
AUROC: 0.8394280356305672
AUPRC: 0.7074769923922323
Sensitivity: 0.7698815566835872
Specificity: 0.7587904360056259
Threshold: 0.18
Accuracy:  0.762046696472926

AUROC/AUPRC on Test Data
Loss: 0.4758994655406221
AUROC: 0.8315696718581849
AUPRC: 0.6820213557510154
Sensitivity: 0.7637292464878672
Specificity: 0.7472552607502287
Threshold: 0.17
Accuracy:  0.7515998652745032

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0033.model
AUROC/AUPRC on Validation Data
Loss: 0.5391718428581953
AUROC: 0.8371826816214145
AUPRC: 0.7058340168790442
Sensitivity: 0.7698815566835872
Specificity: 0.7475386779184248
Threshold: 0.14
Accuracy:  0.7540983606557377

AUROC/AUPRC on Test Data
Loss: 0.5078388316200134
AUROC: 0.8286270957994623
AUPRC: 0.6782614947785478
Sensitivity: 0.7496807151979565
Specificity: 0.7591491308325709
Threshold: 0.14
Accuracy:  0.7566520714045133

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0034.model
AUROC/AUPRC on Validation Data
Loss: 0.49868632294237614
AUROC: 0.8397754883972194
AUPRC: 0.7026420040811019
Sensitivity: 0.7715736040609137
Specificity: 0.7573839662447257
Threshold: 0.21
Accuracy:  0.7615499254843517

AUROC/AUPRC on Test Data
Loss: 0.472651832598321
AUROC: 0.8307552473128081
AUPRC: 0.6816848261495785
Sensitivity: 0.7630906768837803
Specificity: 0.7422232387923148
Threshold: 0.2
Accuracy:  0.7477265072414955

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0035.model
AUROC/AUPRC on Validation Data
Loss: 0.5423145927488804
AUROC: 0.8429418302193474
AUPRC: 0.7071502356791152
Sensitivity: 0.7681895093062606
Specificity: 0.760196905766526
Threshold: 0.13
Accuracy:  0.7625434674615003

AUROC/AUPRC on Test Data
Loss: 0.513901248257211
AUROC: 0.8338292033712738
AUPRC: 0.6823066259058962
Sensitivity: 0.7637292464878672
Specificity: 0.7483989021043
Threshold: 0.12
Accuracy:  0.7524418996295049

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0036.model
AUROC/AUPRC on Validation Data
Loss: 0.49805725924670696
AUROC: 0.8414972834429237
AUPRC: 0.707693625717642
Sensitivity: 0.7681895093062606
Specificity: 0.7587904360056259
Threshold: 0.19
Accuracy:  0.7615499254843517

AUROC/AUPRC on Test Data
Loss: 0.4716254104959204
AUROC: 0.8333546579358486
AUPRC: 0.685660012365035
Sensitivity: 0.7650063856960408
Specificity: 0.744967978042086
Threshold: 0.18
Accuracy:  0.7502526103065005

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0037.model
AUROC/AUPRC on Validation Data
Loss: 0.5064173527061939
AUROC: 0.8453644803320315
AUPRC: 0.7059463293033333
Sensitivity: 0.7631133671742809
Specificity: 0.7658227848101266
Threshold: 0.19
Accuracy:  0.7650273224043715

AUROC/AUPRC on Test Data
Loss: 0.47972348268995896
AUROC: 0.8356295256356777
AUPRC: 0.685394209264836
Sensitivity: 0.7637292464878672
Specificity: 0.7543458371454712
Threshold: 0.18
Accuracy:  0.7568204782755137

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0038.model
AUROC/AUPRC on Validation Data
Loss: 0.575722461566329
AUROC: 0.8413533047279753
AUPRC: 0.7073445873686073
Sensitivity: 0.7766497461928934
Specificity: 0.7531645569620253
Threshold: 0.1
Accuracy:  0.7600596125186289

AUROC/AUPRC on Test Data
Loss: 0.5419714999325732
AUROC: 0.8321593117236239
AUPRC: 0.6788181006661694
Sensitivity: 0.7541507024265645
Specificity: 0.7621225983531564
Threshold: 0.1
Accuracy:  0.76002020882452

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0039.model
AUROC/AUPRC on Validation Data
Loss: 0.4797915071249008
AUROC: 0.841704327214833
AUPRC: 0.7084621671200648
Sensitivity: 0.7580372250423012
Specificity: 0.770745428973277
Threshold: 0.21
Accuracy:  0.7670144063586687

AUROC/AUPRC on Test Data
Loss: 0.45832491523407876
AUROC: 0.832677236658686
AUPRC: 0.6832697161535572
Sensitivity: 0.7618135376756067
Specificity: 0.7538883806038427
Threshold: 0.2
Accuracy:  0.7559784439205119

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0040.model
AUROC/AUPRC on Validation Data
Loss: 0.4930574782192707
AUROC: 0.8419292195877687
AUPRC: 0.7073819036654796
Sensitivity: 0.7529610829103215
Specificity: 0.7728551336146273
Threshold: 0.19
Accuracy:  0.7670144063586687

AUROC/AUPRC on Test Data
Loss: 0.47016886003473973
AUROC: 0.8326820566030901
AUPRC: 0.6820655118303858
Sensitivity: 0.7528735632183908
Specificity: 0.7598353156450137
Threshold: 0.18
Accuracy:  0.757999326372516

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0041.model
AUROC/AUPRC on Validation Data
Loss: 0.5325520168989897
AUROC: 0.8378490293930762
AUPRC: 0.705723247478523
Sensitivity: 0.766497461928934
Specificity: 0.7433192686357243
Threshold: 0.17
Accuracy:  0.7501241927471436

AUROC/AUPRC on Test Data
Loss: 0.5023674105710172
AUROC: 0.8294883322291279
AUPRC: 0.6786440985916022
Sensitivity: 0.7509578544061303
Specificity: 0.7580054894784996
Threshold: 0.17
Accuracy:  0.7561468507915123

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0042.model
AUROC/AUPRC on Validation Data
Loss: 0.554601289331913
AUROC: 0.845206222736262
AUPRC: 0.7097049044311111
Sensitivity: 0.7648054145516074
Specificity: 0.7658227848101266
Threshold: 0.14
Accuracy:  0.7655240933929458

AUROC/AUPRC on Test Data
Loss: 0.5228794708530954
AUROC: 0.8345641718634431
AUPRC: 0.6815229727601316
Sensitivity: 0.7650063856960408
Specificity: 0.7504574565416285
Threshold: 0.13
Accuracy:  0.7542943752105086

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0043.model
AUROC/AUPRC on Validation Data
Loss: 0.5608969144523144
AUROC: 0.8402062346353294
AUPRC: 0.7096553844470351
Sensitivity: 0.7681895093062606
Specificity: 0.7580872011251758
Threshold: 0.12
Accuracy:  0.7610531544957775

AUROC/AUPRC on Test Data
Loss: 0.523226298867388
AUROC: 0.8325717821174804
AUPRC: 0.680505978780476
Sensitivity: 0.7522349936143039
Specificity: 0.7655535224153706
Threshold: 0.12
Accuracy:  0.7620410912765241

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0044.model
AUROC/AUPRC on Validation Data
Loss: 0.48537755012512207
AUROC: 0.8381560253307345
AUPRC: 0.7048572847829768
Sensitivity: 0.7614213197969543
Specificity: 0.7637130801687764
Threshold: 0.17
Accuracy:  0.7630402384500745

AUROC/AUPRC on Test Data
Loss: 0.46247512133831675
AUROC: 0.8297294755082558
AUPRC: 0.6786964241779699
Sensitivity: 0.7586206896551724
Specificity: 0.7502287282708143
Threshold: 0.16
Accuracy:  0.7524418996295049

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0045.model
AUROC/AUPRC on Validation Data
Loss: 0.4832758419215679
AUROC: 0.8408725824069909
AUPRC: 0.707396782329009
Sensitivity: 0.7563451776649747
Specificity: 0.7721518987341772
Threshold: 0.2
Accuracy:  0.767511177347243

AUROC/AUPRC on Test Data
Loss: 0.45803958114157334
AUROC: 0.8323648166259454
AUPRC: 0.6808035160010159
Sensitivity: 0.756066411238825
Specificity: 0.7593778591033852
Threshold: 0.19
Accuracy:  0.7585045469855171

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0046.model
AUROC/AUPRC on Validation Data
Loss: 0.5162584204226732
AUROC: 0.8416745795464552
AUPRC: 0.7104356636696156
Sensitivity: 0.7580372250423012
Specificity: 0.770042194092827
Threshold: 0.16
Accuracy:  0.7665176353700944

AUROC/AUPRC on Test Data
Loss: 0.4856717608710553
AUROC: 0.8334608427716608
AUPRC: 0.6834573405103614
Sensitivity: 0.7541507024265645
Specificity: 0.7614364135407137
Threshold: 0.15
Accuracy:  0.7595149882115191

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0047.model
AUROC/AUPRC on Validation Data
Loss: 0.4724709130823612
AUROC: 0.8400967632156993
AUPRC: 0.7084837544106924
Sensitivity: 0.7732656514382402
Specificity: 0.7531645569620253
Threshold: 0.19
Accuracy:  0.7590660705414803

AUROC/AUPRC on Test Data
Loss: 0.4485265349454068
AUROC: 0.8322228473544055
AUPRC: 0.6815161876211056
Sensitivity: 0.7528735632183908
Specificity: 0.7598353156450137
Threshold: 0.19
Accuracy:  0.757999326372516

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0048.model
AUROC/AUPRC on Validation Data
Loss: 0.521317396312952
AUROC: 0.8396945747392319
AUPRC: 0.707433888955646
Sensitivity: 0.7715736040609137
Specificity: 0.7538677918424754
Threshold: 0.15
Accuracy:  0.7590660705414803

AUROC/AUPRC on Test Data
Loss: 0.49153743589178045
AUROC: 0.8321552220738264
AUPRC: 0.6795098330261797
Sensitivity: 0.7503192848020435
Specificity: 0.7653247941445562
Threshold: 0.15
Accuracy:  0.7613674637925227

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0049.model
AUROC/AUPRC on Validation Data
Loss: 0.5081488862633705
AUROC: 0.8396933848324969
AUPRC: 0.7063930619888897
Sensitivity: 0.754653130287648
Specificity: 0.7742616033755274
Threshold: 0.16
Accuracy:  0.7685047193243915

AUROC/AUPRC on Test Data
Loss: 0.4798665794920414
AUROC: 0.8307171259343389
AUPRC: 0.678400946834389
Sensitivity: 0.7490421455938697
Specificity: 0.760064043915828
Threshold: 0.15
Accuracy:  0.7571572920175144

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0050.model
AUROC/AUPRC on Validation Data
Loss: 0.57800018414855
AUROC: 0.8414592064274002
AUPRC: 0.7089681923795803
Sensitivity: 0.766497461928934
Specificity: 0.7538677918424754
Threshold: 0.09
Accuracy:  0.7575757575757576

AUROC/AUPRC on Test Data
Loss: 0.5434450586425498
AUROC: 0.8325614119340656
AUPRC: 0.6790394796057262
Sensitivity: 0.7535121328224776
Specificity: 0.7664684354986276
Threshold: 0.09
Accuracy:  0.7630515325025261

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0051.model
AUROC/AUPRC on Validation Data
Loss: 0.4556660782545805
AUROC: 0.8398980487909357
AUPRC: 0.70722208040015
Sensitivity: 0.7614213197969543
Specificity: 0.7552742616033755
Threshold: 0.21
Accuracy:  0.7570789865871833

AUROC/AUPRC on Test Data
Loss: 0.43417225334238496
AUROC: 0.8319166348258218
AUPRC: 0.680478028158611
Sensitivity: 0.7496807151979565
Specificity: 0.7593778591033852
Threshold: 0.21
Accuracy:  0.7568204782755137

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0052.model
AUROC/AUPRC on Validation Data
Loss: 0.4833982177078724
AUROC: 0.8446826637728135
AUPRC: 0.7091710743643725
Sensitivity: 0.7580372250423012
Specificity: 0.770745428973277
Threshold: 0.19
Accuracy:  0.7670144063586687

AUROC/AUPRC on Test Data
Loss: 0.459533118504159
AUROC: 0.835270804924873
AUPRC: 0.6836871633200879
Sensitivity: 0.7503192848020435
Specificity: 0.7630375114364135
Threshold: 0.18
Accuracy:  0.7596833950825194

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0053.model
AUROC/AUPRC on Validation Data
Loss: 0.4993793126195669
AUROC: 0.8436379256593868
AUPRC: 0.7122931860662276
Sensitivity: 0.7614213197969543
Specificity: 0.7686357243319268
Threshold: 0.17
Accuracy:  0.7665176353700944

AUROC/AUPRC on Test Data
Loss: 0.47299535096959866
AUROC: 0.8356024317057695
AUPRC: 0.6863296701452255
Sensitivity: 0.7598978288633461
Specificity: 0.755032021957914
Threshold: 0.16
Accuracy:  0.7563152576625126

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0054.model
AUROC/AUPRC on Validation Data
Loss: 0.4734979048371315
AUROC: 0.8426360241884241
AUPRC: 0.7079988899956231
Sensitivity: 0.754653130287648
Specificity: 0.7665260196905767
Threshold: 0.18
Accuracy:  0.7630402384500745

AUROC/AUPRC on Test Data
Loss: 0.45583561696904773
AUROC: 0.832590623718333
AUPRC: 0.6819915862767056
Sensitivity: 0.7637292464878672
Specificity: 0.7493138151875571
Threshold: 0.16
Accuracy:  0.7531155271135063

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0055.model
AUROC/AUPRC on Validation Data
Loss: 0.4698951430618763
AUROC: 0.8457726183421743
AUPRC: 0.7114643632772036
Sensitivity: 0.7681895093062606
Specificity: 0.7580872011251758
Threshold: 0.19
Accuracy:  0.7610531544957775

AUROC/AUPRC on Test Data
Loss: 0.4457652121782303
AUROC: 0.8362696288584385
AUPRC: 0.6833504458231858
Sensitivity: 0.7637292464878672
Specificity: 0.755032021957914
Threshold: 0.18
Accuracy:  0.7573256988885146

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0056.model
AUROC/AUPRC on Validation Data
Loss: 0.45753935910761356
AUROC: 0.8405286993605441
AUPRC: 0.7070210910127827
Sensitivity: 0.7580372250423012
Specificity: 0.7623066104078763
Threshold: 0.23
Accuracy:  0.7610531544957775

AUROC/AUPRC on Test Data
Loss: 0.4405457944946086
AUROC: 0.8307662017319083
AUPRC: 0.679781029351886
Sensitivity: 0.7554278416347382
Specificity: 0.7511436413540714
Threshold: 0.22
Accuracy:  0.7522734927585045

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0057.model
AUROC/AUPRC on Validation Data
Loss: 0.5403839480131865
AUROC: 0.84394373169031
AUPRC: 0.7108130102202416
Sensitivity: 0.7631133671742809
Specificity: 0.7658227848101266
Threshold: 0.09
Accuracy:  0.7650273224043715

AUROC/AUPRC on Test Data
Loss: 0.5145268611451412
AUROC: 0.8348516158206349
AUPRC: 0.6821414582612025
Sensitivity: 0.7650063856960408
Specificity: 0.7534309240622141
Threshold: 0.08
Accuracy:  0.756483664533513

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0058.model
AUROC/AUPRC on Validation Data
Loss: 0.4807884432375431
AUROC: 0.8451395879590957
AUPRC: 0.711068079792909
Sensitivity: 0.7631133671742809
Specificity: 0.7616033755274262
Threshold: 0.14
Accuracy:  0.762046696472926

AUROC/AUPRC on Test Data
Loss: 0.4623507042514517
AUROC: 0.8345381733754451
AUPRC: 0.6833101223512522
Sensitivity: 0.756066411238825
Specificity: 0.7561756633119854
Threshold: 0.13
Accuracy:  0.7561468507915123

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0059.model
AUROC/AUPRC on Validation Data
Loss: 0.4840606413781643
AUROC: 0.846599603523076
AUPRC: 0.7125896895953701
Sensitivity: 0.7715736040609137
Specificity: 0.750351617440225
Threshold: 0.2
Accuracy:  0.756582215598609

AUROC/AUPRC on Test Data
Loss: 0.45982681975719775
AUROC: 0.8360457935614892
AUPRC: 0.6843860371919481
Sensitivity: 0.7528735632183908
Specificity: 0.760064043915828
Threshold: 0.2
Accuracy:  0.7581677332435164

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0060.model
AUROC/AUPRC on Validation Data
Loss: 0.474739545956254
AUROC: 0.8411676792772983
AUPRC: 0.7079843851950267
Sensitivity: 0.7648054145516074
Specificity: 0.7552742616033755
Threshold: 0.17
Accuracy:  0.7580725285643318

AUROC/AUPRC on Test Data
Loss: 0.44909675863194976
AUROC: 0.8334246201591692
AUPRC: 0.6822294326394251
Sensitivity: 0.7598978288633461
Specificity: 0.7516010978957
Threshold: 0.16
Accuracy:  0.7537891545975076

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0061.model
AUROC/AUPRC on Validation Data
Loss: 0.5767617262899876
AUROC: 0.844449442052732
AUPRC: 0.7121765788090851
Sensitivity: 0.7698815566835872
Specificity: 0.7566807313642757
Threshold: 0.07
Accuracy:  0.7605563835072032

AUROC/AUPRC on Test Data
Loss: 0.544497310164127
AUROC: 0.8355040610222488
AUPRC: 0.684533058035209
Sensitivity: 0.7541507024265645
Specificity: 0.7653247941445562
Threshold: 0.07
Accuracy:  0.7623779050185248

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0062.model
AUROC/AUPRC on Validation Data
Loss: 0.4699559174478054
AUROC: 0.846385420310756
AUPRC: 0.7119827707098078
Sensitivity: 0.7631133671742809
Specificity: 0.7679324894514767
Threshold: 0.2
Accuracy:  0.7665176353700944

AUROC/AUPRC on Test Data
Loss: 0.4477721002507717
AUROC: 0.8364858690914785
AUPRC: 0.6846902148669936
Sensitivity: 0.7605363984674329
Specificity: 0.7513723696248856
Threshold: 0.19
Accuracy:  0.7537891545975076

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0063.model
AUROC/AUPRC on Validation Data
Loss: 0.4863651189953089
AUROC: 0.8413878120232936
AUPRC: 0.7091803601331601
Sensitivity: 0.766497461928934
Specificity: 0.7566807313642757
Threshold: 0.16
Accuracy:  0.7595628415300546

AUROC/AUPRC on Test Data
Loss: 0.45911494911985196
AUROC: 0.833014924884818
AUPRC: 0.6817131084208157
Sensitivity: 0.7586206896551724
Specificity: 0.7509149130832571
Threshold: 0.15
Accuracy:  0.7529471202425059

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0064.model
AUROC/AUPRC on Validation Data
Loss: 0.4991271011531353
AUROC: 0.8460689051192167
AUPRC: 0.7137303232563256
Sensitivity: 0.7563451776649747
Specificity: 0.770042194092827
Threshold: 0.16
Accuracy:  0.7660208643815202

AUROC/AUPRC on Test Data
Loss: 0.4718013608709295
AUROC: 0.8359225928613411
AUPRC: 0.6834650582136376
Sensitivity: 0.7541507024265645
Specificity: 0.7591491308325709
Threshold: 0.15
Accuracy:  0.7578309195015157

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0065.model
AUROC/AUPRC on Validation Data
Loss: 0.526051489636302
AUROC: 0.843661723794089
AUPRC: 0.7097813179427649
Sensitivity: 0.7631133671742809
Specificity: 0.7616033755274262
Threshold: 0.11
Accuracy:  0.762046696472926

AUROC/AUPRC on Test Data
Loss: 0.5011735831169372
AUROC: 0.8344340333645316
AUPRC: 0.6814708597728822
Sensitivity: 0.7541507024265645
Specificity: 0.755946935041171
Threshold: 0.1
Accuracy:  0.755473223307511

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0066.model
AUROC/AUPRC on Validation Data
Loss: 0.45750696770846844
AUROC: 0.8414235092253468
AUPRC: 0.7068817373815293
Sensitivity: 0.7563451776649747
Specificity: 0.7573839662447257
Threshold: 0.21
Accuracy:  0.7570789865871833

AUROC/AUPRC on Test Data
Loss: 0.43840798545391
AUROC: 0.8321571208398036
AUPRC: 0.6810415564292484
Sensitivity: 0.7522349936143039
Specificity: 0.7566331198536139
Threshold: 0.2
Accuracy:  0.755473223307511

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0067.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:07<00:00,  2.25it/s]
Loss: 0.48794314451515675
AUROC: 0.843067960333269
AUPRC: 0.7090349341135976
Sensitivity: 0.7648054145516074
Specificity: 0.7552742616033755
Threshold: 0.17
Accuracy:  0.7580725285643318

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.41it/s]
Loss: 0.46417428457990606
AUROC: 0.8343201074058885
AUPRC: 0.6808320131234928
Sensitivity: 0.7535121328224776
Specificity: 0.7566331198536139
Threshold: 0.16
Accuracy:  0.7558100370495117

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0068.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.38it/s]
Loss: 0.45645890571177006
AUROC: 0.8416472116915477
AUPRC: 0.7058238692583959
Sensitivity: 0.7614213197969543
Specificity: 0.7637130801687764
Threshold: 0.2
Accuracy:  0.7630402384500745

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.45it/s]
Loss: 0.4361728680260638
AUROC: 0.833089195846318
AUPRC: 0.6800079643130094
Sensitivity: 0.7592592592592593
Specificity: 0.7488563586459286
Threshold: 0.19
Accuracy:  0.7515998652745032

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0069.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:07<00:00,  2.24it/s]
Loss: 0.44081423431634903
AUROC: 0.8433392590688742
AUPRC: 0.7086050516800242
Sensitivity: 0.751269035532995
Specificity: 0.7714486638537271
Threshold: 0.28
Accuracy:  0.7655240933929458

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:22<00:00,  2.13it/s]
Loss: 0.4253850886796383
AUROC: 0.8332990094868191
AUPRC: 0.6792660214124837
Sensitivity: 0.7579821200510856
Specificity: 0.7598353156450137
Threshold: 0.27
Accuracy:  0.7593465813405187

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0070.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.39it/s]
Loss: 0.4406323153525591
AUROC: 0.8456726661764251
AUPRC: 0.7115977976844584
Sensitivity: 0.7597292724196277
Specificity: 0.7756680731364276
Threshold: 0.25
Accuracy:  0.7709885742672627

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:20<00:00,  2.31it/s]
Loss: 0.42616366547472934
AUROC: 0.8350710693499442
AUPRC: 0.6834741759529944
Sensitivity: 0.7592592592592593
Specificity: 0.755946935041171
Threshold: 0.24
Accuracy:  0.7568204782755137

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0071.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:07<00:00,  2.27it/s]
Loss: 0.44906036369502544
AUROC: 0.842364725452819
AUPRC: 0.7082493874086826
Sensitivity: 0.766497461928934
Specificity: 0.7623066104078763
Threshold: 0.24
Accuracy:  0.7635370094386488

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.38it/s]
Loss: 0.43045132527960106
AUROC: 0.8330014874640549
AUPRC: 0.6791155052718462
Sensitivity: 0.7503192848020435
Specificity: 0.7671546203110704
Threshold: 0.24
Accuracy:  0.7627147187605254

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0072.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.38it/s]
Loss: 0.439900541678071
AUROC: 0.8419910947379945
AUPRC: 0.708820711496917
Sensitivity: 0.7648054145516074
Specificity: 0.7623066104078763
Threshold: 0.29
Accuracy:  0.7630402384500745

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.40it/s]
Loss: 0.42678046385024454
AUROC: 0.8322519130797517
AUPRC: 0.6798376367268799
Sensitivity: 0.7624521072796935
Specificity: 0.7543458371454712
Threshold: 0.28
Accuracy:  0.756483664533513

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0073.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.34it/s]
Loss: 0.44473424553871155
AUROC: 0.8462747589843908
AUPRC: 0.7115755406915806
Sensitivity: 0.7715736040609137
Specificity: 0.7580872011251758
Threshold: 0.23
Accuracy:  0.762046696472926

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.42it/s]
Loss: 0.4323716360203763
AUROC: 0.8344154838815216
AUPRC: 0.6835086551128877
Sensitivity: 0.7605363984674329
Specificity: 0.7488563586459286
Threshold: 0.22
Accuracy:  0.7519366790165039

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0074.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.36it/s]
Loss: 0.5358517654240131
AUROC: 0.8419577773494112
AUPRC: 0.7086412523917373
Sensitivity: 0.7529610829103215
Specificity: 0.7679324894514767
Threshold: 0.13
Accuracy:  0.7635370094386488

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.40it/s]
Loss: 0.505840374117202
AUROC: 0.8328262167584501
AUPRC: 0.6791894860461659
Sensitivity: 0.7509578544061303
Specificity: 0.7607502287282708
Threshold: 0.12
Accuracy:  0.7581677332435164

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0075.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:07<00:00,  2.26it/s]
Loss: 0.5110163055360317
AUROC: 0.8382833453513913
AUPRC: 0.7068565437724962
Sensitivity: 0.7563451776649747
Specificity: 0.7665260196905767
Threshold: 0.15
Accuracy:  0.7635370094386488

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.38it/s]
Loss: 0.4821821612880585
AUROC: 0.830950089913872
AUPRC: 0.6800471514408521
Sensitivity: 0.7573435504469987
Specificity: 0.7538883806038427
Threshold: 0.14
Accuracy:  0.7547995958235096

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0076.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:07<00:00,  2.23it/s]
Loss: 0.47440873458981514
AUROC: 0.8484094516671783
AUPRC: 0.7159446214807926
Sensitivity: 0.7529610829103215
Specificity: 0.7714486638537271
Threshold: 0.18
Accuracy:  0.7660208643815202

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.40it/s]
Loss: 0.4511975791860134
AUROC: 0.8375011246536943
AUPRC: 0.686160393352426
Sensitivity: 0.7515964240102171
Specificity: 0.7634949679780421
Threshold: 0.17
Accuracy:  0.7603570225665207

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0077.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.38it/s]
Loss: 0.4545825235545635
AUROC: 0.845726211979505
AUPRC: 0.7069733227302313
Sensitivity: 0.766497461928934
Specificity: 0.750351617440225
Threshold: 0.23
Accuracy:  0.7550919026328863

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.37it/s]
Loss: 0.43589073007411144
AUROC: 0.835840142600246
AUPRC: 0.6822300519666218
Sensitivity: 0.7586206896551724
Specificity: 0.7593778591033852
Threshold: 0.23
Accuracy:  0.7591781744695184

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0078.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.31it/s]
Loss: 0.4996599778532982
AUROC: 0.8460558161451304
AUPRC: 0.7144023526801707
Sensitivity: 0.7648054145516074
Specificity: 0.7637130801687764
Threshold: 0.15
Accuracy:  0.764033780427223

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.38it/s]
Loss: 0.47464648459820036
AUROC: 0.8364906890358825
AUPRC: 0.684964886132783
Sensitivity: 0.7586206896551724
Specificity: 0.7518298261665142
Threshold: 0.14
Accuracy:  0.7536207477265072

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0079.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.33it/s]
Loss: 0.48831004835665226
AUROC: 0.8432785738253837
AUPRC: 0.7107406037606439
Sensitivity: 0.7648054145516074
Specificity: 0.7693389592123769
Threshold: 0.15
Accuracy:  0.7680079483358172

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.41it/s]
Loss: 0.4647266363843958
AUROC: 0.8341324216919699
AUPRC: 0.6851365686023013
Sensitivity: 0.7484035759897829
Specificity: 0.760064043915828
Threshold: 0.14
Accuracy:  0.756988885146514

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0080.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:07<00:00,  2.14it/s]
Loss: 0.5158652979880571
AUROC: 0.8478739936363787
AUPRC: 0.7146477308293157
Sensitivity: 0.7732656514382402
Specificity: 0.7559774964838256
Threshold: 0.11
Accuracy:  0.7610531544957775

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:20<00:00,  2.34it/s]
Loss: 0.4893920573782414
AUROC: 0.8373852999290738
AUPRC: 0.686913052229716
Sensitivity: 0.7503192848020435
Specificity: 0.7639524245196706
Threshold: 0.11
Accuracy:  0.7603570225665207

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0081.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.32it/s]
Loss: 0.449527969583869
AUROC: 0.8429477797530229
AUPRC: 0.7090389797195107
Sensitivity: 0.7698815566835872
Specificity: 0.759493670886076
Threshold: 0.24
Accuracy:  0.7625434674615003

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.39it/s]
Loss: 0.43220408292526896
AUROC: 0.8331538269190099
AUPRC: 0.6830916173855911
Sensitivity: 0.756066411238825
Specificity: 0.7586916742909423
Threshold: 0.24
Accuracy:  0.757999326372516

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0082.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:07<00:00,  2.20it/s]
Loss: 0.44343325681984425
AUROC: 0.8421255541990618
AUPRC: 0.7110481407027957
Sensitivity: 0.7715736040609137
Specificity: 0.7630098452883263
Threshold: 0.26
Accuracy:  0.7655240933929458

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:20<00:00,  2.31it/s]
Loss: 0.427119587013062
AUROC: 0.8336986266955979
AUPRC: 0.6838244107678304
Sensitivity: 0.7496807151979565
Specificity: 0.7646386093321135
Threshold: 0.26
Accuracy:  0.7606938363085214

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0083.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.38it/s]
Loss: 0.4844377413392067
AUROC: 0.8427109883127359
AUPRC: 0.7098297866743506
Sensitivity: 0.7597292724196277
Specificity: 0.7566807313642757
Threshold: 0.19
Accuracy:  0.7575757575757576

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.39it/s]
Loss: 0.45919364373734656
AUROC: 0.8338582690966196
AUPRC: 0.680221372966512
Sensitivity: 0.7547892720306514
Specificity: 0.7552607502287283
Threshold: 0.18
Accuracy:  0.7551364095655103

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0084.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.34it/s]
Loss: 0.4566381387412548
AUROC: 0.8497183490757995
AUPRC: 0.7136964881901471
Sensitivity: 0.7749576988155669
Specificity: 0.7559774964838256
Threshold: 0.18
Accuracy:  0.7615499254843517

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:20<00:00,  2.32it/s]
Loss: 0.43732675117381076
AUROC: 0.8394659092635242
AUPRC: 0.6878070895305494
Sensitivity: 0.7535121328224776
Specificity: 0.7607502287282708
Threshold: 0.18
Accuracy:  0.7588413607275177

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0085.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.37it/s]
Loss: 0.5187577810138464
AUROC: 0.8441067489130203
AUPRC: 0.7136326785711031
Sensitivity: 0.7597292724196277
Specificity: 0.7644163150492265
Threshold: 0.14
Accuracy:  0.7630402384500745

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.41it/s]
Loss: 0.4888426128854143
AUROC: 0.8351462166649724
AUPRC: 0.6840382598992762
Sensitivity: 0.7541507024265645
Specificity: 0.7614364135407137
Threshold: 0.13
Accuracy:  0.7595149882115191

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0086.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.30it/s]
Loss: 0.5369492992758751
AUROC: 0.8471921770771607
AUPRC: 0.7160919657440054
Sensitivity: 0.7580372250423012
Specificity: 0.7827004219409283
Threshold: 0.11
Accuracy:  0.7754595131644312

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.43it/s]
Loss: 0.5038588963290478
AUROC: 0.8378721143138912
AUPRC: 0.6857162436845922
Sensitivity: 0.7592592592592593
Specificity: 0.7589204025617566
Threshold: 0.1
Accuracy:  0.759009767598518

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0087.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.36it/s]
Loss: 0.45171234384179115
AUROC: 0.8479013614912864
AUPRC: 0.7136744115367777
Sensitivity: 0.7580372250423012
Specificity: 0.7770745428973277
Threshold: 0.24
Accuracy:  0.771485345255837

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.38it/s]
Loss: 0.43268106307121035
AUROC: 0.8376702608846031
AUPRC: 0.6835438232295161
Sensitivity: 0.7554278416347382
Specificity: 0.7543458371454712
Threshold: 0.23
Accuracy:  0.7546311889525092

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0088.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.36it/s]
Loss: 0.4410574194043875
AUROC: 0.8425158436081781
AUPRC: 0.709868880469922
Sensitivity: 0.7681895093062606
Specificity: 0.7658227848101266
Threshold: 0.28
Accuracy:  0.7665176353700944

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.40it/s]
Loss: 0.4258056184712877
AUROC: 0.8331311877862024
AUPRC: 0.6809555821722857
Sensitivity: 0.7592592592592593
Specificity: 0.7538883806038427
Threshold: 0.27
Accuracy:  0.7553048164365106

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0089.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.34it/s]
Loss: 0.5484356246888638
AUROC: 0.8437295484779902
AUPRC: 0.7114152948712689
Sensitivity: 0.7749576988155669
Specificity: 0.7524613220815752
Threshold: 0.08
Accuracy:  0.7590660705414803

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.36it/s]
Loss: 0.5166764570043442
AUROC: 0.8343533358104926
AUPRC: 0.6829148602825305
Sensitivity: 0.7598978288633461
Specificity: 0.7586916742909423
Threshold: 0.08
Accuracy:  0.759009767598518

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0090.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.35it/s]
Loss: 0.49562077037990093
AUROC: 0.8473837520615134
AUPRC: 0.7147333991737266
Sensitivity: 0.7732656514382402
Specificity: 0.7616033755274262
Threshold: 0.14
Accuracy:  0.7650273224043715

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.42it/s]
Loss: 0.4685112174521101
AUROC: 0.8375233256097376
AUPRC: 0.6858692252620324
Sensitivity: 0.7535121328224776
Specificity: 0.7628087831655993
Threshold: 0.14
Accuracy:  0.7603570225665207

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0091.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.36it/s]
Loss: 0.48737313970923424
AUROC: 0.845802366010552
AUPRC: 0.7150538475026883
Sensitivity: 0.7698815566835872
Specificity: 0.7587904360056259
Threshold: 0.14
Accuracy:  0.762046696472926

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.39it/s]
Loss: 0.4595479194788223
AUROC: 0.8378177803951536
AUPRC: 0.6859379395989368
Sensitivity: 0.7496807151979565
Specificity: 0.7669258920402562
Threshold: 0.14
Accuracy:  0.7623779050185248

Intermediate Model:
  ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0092.model
AUROC/AUPRC on Validation Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00,  2.38it/s]
Loss: 0.5536642409861088
AUROC: 0.8524444254059367
AUPRC: 0.7176617089494722
Sensitivity: 0.7749576988155669
Specificity: 0.7756680731364276
Threshold: 0.09
Accuracy:  0.7754595131644312

AUROC/AUPRC on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:20<00:00,  2.34it/s]
Loss: 0.5227712243795395
AUROC: 0.8403402179666495
AUPRC: 0.6891426567867084
Sensitivity: 0.7745849297573435
Specificity: 0.7470265324794144
Threshold: 0.08
Accuracy:  0.7542943752105086


Plot AUROC/AUPRC for Each Intermediate Model
  Epoch with best Validation Loss:      72, 0.4381
  Epoch with best model Test AUROC:     92, 0.8403
  Epoch with best model Test Accuracy:   4, 0.7657
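The two selection criteria summarized above (minimum validation loss vs. maximum test AUROC) amount to an argmin/argmax over the per-epoch history. A minimal sketch, assuming a hypothetical `history` list of per-epoch records (the real run accumulates these inside the training loop):

```python
# Hypothetical per-epoch history; values mirror the summary logged above.
history = [
    {"epoch": 72, "val_loss": 0.4381, "test_auroc": 0.8323},
    {"epoch": 92, "val_loss": 0.5537, "test_auroc": 0.8403},
]

# Best checkpoint by validation loss (lower is better).
best_by_val_loss = min(history, key=lambda h: h["val_loss"])

# Best checkpoint by test AUROC (higher is better).
best_by_auroc = max(history, key=lambda h: h["test_auroc"])
```

Note that the two criteria can pick different checkpoints, which is why the run reports stats for both.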

AUROC/AUPRC Plots - Best Model Based on Validation Loss
  Epoch with best Validation Loss:   72, 0.4381
  Best Model Based on Validation Loss:
    ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0072.model

Generate Stats Based on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:19<00:00,  2.36it/s]
Loss: 0.42678046385024454
AUROC: 0.8322519130797517
AUPRC: 0.6798376367268799
Sensitivity: 0.7624521072796935
Specificity: 0.7543458371454712
Threshold: 0.28
Accuracy:  0.756483664533513
best_model_val_test_auroc: 0.8322519130797517
best_model_val_test_auprc: 0.6798376367268799

AUROC/AUPRC Plots - Best Model Based on Model AUROC
  Epoch with best model Test AUROC:  92, 0.8403
  Best Model Based on Model AUROC:
    ./vitaldb_cache/models/ABP_EEG_ECG_12_RESIDUAL_BLOCKS_128_BATCH_SIZE_1e-04_LEARNING_RATE_1e-01_WEIGHT_DECAY_003_MINS__ALL_MAX_CASES_d3749905_0092.model

Generate Stats Based on Test Data
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:20<00:00,  2.34it/s]
Loss: 0.5227712243795395
AUROC: 0.8403402179666495
AUPRC: 0.6891426567867084
Sensitivity: 0.7745849297573435
Specificity: 0.7470265324794144
Threshold: 0.08
Accuracy:  0.7542943752105086
best_model_auroc_test_auroc: 0.8403402179666495
best_model_auroc_test_auprc: 0.6891426567867084

Total Processing Time: 8434.2100 sec
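The per-epoch sensitivity, specificity, and accuracy figures logged above follow from binarizing the model's scores at the reported threshold. A sketch of that computation (not the project's exact evaluation code):

```python
def metrics_at_threshold(scores, labels, threshold):
    """Sensitivity, specificity, and accuracy of scores binarized at threshold.

    scores: predicted probabilities in [0, 1]; labels: 0/1 ground truth.
    """
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(labels)
    return sensitivity, specificity, accuracy
```

The threshold itself is tuned per checkpoint on validation data, which is why it varies from epoch to epoch in the log.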

Hyperparameter search¶

Batch size¶

Holding all other parameters fixed, sweep the batch sizes from 16 to 256:

In [104]:
ENABLE_EXPERIMENT = False
DISPLAY_MODEL_PREDICTION = True
DISPLAY_MODEL_PREDICTION_FIRST_ONLY = True

batch_sizes = [16, 32, 64, 128, 256]

if ENABLE_EXPERIMENT:
    for batch_size in batch_sizes:
        (model, best_model_val_loss, best_model_auroc, experimentName) = run_experiment(
            experimentNamePrefix=None, 
            useAbp=True, 
            useEeg=False, 
            useEcg=False,
            nResiduals=12, 
            skip_connection=False,
            batch_size=batch_size,
            learning_rate=1e-4,
            weight_decay=0.0,
            pos_weight=None,
            max_epochs=MAX_EPOCHS,
            patience=PATIENCE,
            device=device
        )

        if DISPLAY_MODEL_PREDICTION:
            for case_id_to_check in my_cases_of_interest_idx:
                preds = predictionsForModel(case_id_to_check, model, best_model_val_loss, device)
                printModelPrediction(case_id_to_check, preds, experimentName)

                if DISPLAY_MODEL_PREDICTION_FIRST_ONLY:
                    break

Learning Rate¶

Holding all other parameters fixed, sweep the learning rate from 1e-2 to 1e-4:

In [105]:
ENABLE_EXPERIMENT = False
DISPLAY_MODEL_PREDICTION = True
DISPLAY_MODEL_PREDICTION_FIRST_ONLY = True

learning_rates = [
    1e-4, 1e-3, 1e-2
]

if ENABLE_EXPERIMENT:
    for learning_rate in learning_rates:
        (model, best_model_val_loss, best_model_auroc, experimentName) = run_experiment(
            experimentNamePrefix=None, 
            useAbp=True, 
            useEeg=False, 
            useEcg=False,
            nResiduals=12, 
            skip_connection=False,
            batch_size=128,
            learning_rate=learning_rate,
            weight_decay=0.0,
            pos_weight=None,
            max_epochs=MAX_EPOCHS,
            patience=PATIENCE,
            device=device
        )
    
        if DISPLAY_MODEL_PREDICTION:
            for case_id_to_check in my_cases_of_interest_idx:
                preds = predictionsForModel(case_id_to_check, model, best_model_val_loss, device)
                printModelPrediction(case_id_to_check, preds, experimentName)

                if DISPLAY_MODEL_PREDICTION_FIRST_ONLY:
                    break

Weight decay¶

Holding all other parameters fixed, sweep the weight decay from 1e-3 to 1e0:

In [106]:
ENABLE_EXPERIMENT = False
DISPLAY_MODEL_PREDICTION = True
DISPLAY_MODEL_PREDICTION_FIRST_ONLY = True

weight_decays = [
    1e-3, 1e-2, 1e-1, 1e0
]

if ENABLE_EXPERIMENT:
    for weight_decay in weight_decays:
        (model, best_model_val_loss, best_model_auroc, experimentName) = run_experiment(
            experimentNamePrefix=None, 
            useAbp=True, 
            useEeg=False, 
            useEcg=False,
            nResiduals=12, 
            skip_connection=False,
            batch_size=128,
            learning_rate=1e-4,
            weight_decay=weight_decay,
            pos_weight=None,
            max_epochs=MAX_EPOCHS,
            patience=PATIENCE,
            device=device
        )
    
        if DISPLAY_MODEL_PREDICTION:
            for case_id_to_check in my_cases_of_interest_idx:
                preds = predictionsForModel(case_id_to_check, model, best_model_val_loss, device)
                printModelPrediction(case_id_to_check, preds, experimentName)

                if DISPLAY_MODEL_PREDICTION_FIRST_ONLY:
                    break

Label balance¶

Holding all other parameters fixed, sweep the pos_weight argument of BCEWithLogitsLoss from 2 to 4:

In [107]:
ENABLE_EXPERIMENT = False
DISPLAY_MODEL_PREDICTION = True
DISPLAY_MODEL_PREDICTION_FIRST_ONLY = True

pos_weights = [
    2.0, 4.0
]

if ENABLE_EXPERIMENT:
    for pos_weight in pos_weights:
        (model, best_model_val_loss, best_model_auroc, experimentName) = run_experiment(
            experimentNamePrefix=None, 
            useAbp=True, 
            useEeg=False, 
            useEcg=False,
            nResiduals=12, 
            skip_connection=False,
            batch_size=128,
            learning_rate=1e-4,
            weight_decay=0.0,
            pos_weight=pos_weight,
            max_epochs=MAX_EPOCHS,
            patience=PATIENCE,
            device=device
        )
    
        if DISPLAY_MODEL_PREDICTION:
            for case_id_to_check in my_cases_of_interest_idx:
                preds = predictionsForModel(case_id_to_check, model, best_model_val_loss, device)
                printModelPrediction(case_id_to_check, preds, experimentName)

                if DISPLAY_MODEL_PREDICTION_FIRST_ONLY:
                    break
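The pos_weight values swept above scale the positive-class term of the binary cross-entropy, which is how BCEWithLogitsLoss counteracts label imbalance (BCEWithLogitsLoss additionally applies the sigmoid to raw logits). A toy illustration of the weighting, not the training code itself:

```python
import math

def weighted_bce(p, y, pos_weight=1.0):
    """Binary cross-entropy on a probability p for label y in {0, 1}.

    Positive examples (y == 1) contribute pos_weight times more to the
    loss; negative examples are unaffected.
    """
    return -(pos_weight * y * math.log(p) + (1 - y) * math.log(1 - p))
```

With pos_weight = 2.0, a missed hypotension event (y = 1 with a low predicted probability) is penalized twice as heavily as under the unweighted loss, pushing the model toward higher sensitivity.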

Ablations¶

Holding all other parameters fixed, perform ablations on the following parameters:

  • Number of residual blocks (reduced from 12 to 6, then to 1)
  • Skip connection (enabled, keeping 12 residual blocks)
In [108]:
ENABLE_EXPERIMENT = False
DISPLAY_MODEL_PREDICTION = True
DISPLAY_MODEL_PREDICTION_FIRST_ONLY = True

ablations = [
    # nResiduals, skip_connection
    [6, False],
    [1, False],
    [12, True]
]

if ENABLE_EXPERIMENT:
    for (nResiduals, skip_connection) in ablations:
        (model, best_model_val_loss, best_model_auroc, experimentName) = run_experiment(
            experimentNamePrefix=None, 
            useAbp=True, 
            useEeg=False, 
            useEcg=False,
            nResiduals=nResiduals, 
            skip_connection=skip_connection,
            batch_size=128,
            learning_rate=1e-4,
            weight_decay=0.0,
            pos_weight=None,
            max_epochs=MAX_EPOCHS,
            patience=PATIENCE,
            device=device
        )
    
        if DISPLAY_MODEL_PREDICTION:
            for case_id_to_check in my_cases_of_interest_idx:
                preds = predictionsForModel(case_id_to_check, model, best_model_val_loss, device)
                printModelPrediction(case_id_to_check, preds, experimentName)

                if DISPLAY_MODEL_PREDICTION_FIRST_ONLY:
                    break
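The skip_connection ablation above toggles whether a residual block adds its input back to the transformed output. Schematically, with a stand-in transform in place of the model's actual convolution/activation stack:

```python
def residual_block(x, transform, skip_connection=True):
    """Toy residual block: y = x + f(x) with a skip, y = f(x) without."""
    fx = transform(x)
    return x + fx if skip_connection else fx

# Stand-in for the block's learned layers (illustration only).
f = lambda x: 0.1 * x
```

Removing the skip forces every block to carry the full signal through its transform, which is the behavioral difference the ablation measures.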

Validate pre-trained models¶

Calculate performance metrics on pre-trained models:

In [109]:
# XXX replace hash with actual model name
ENABLE_VALIDATION = False

validate_models = [
    # prediction window (minutes), model path
    # 3-minute models
    [3, os.path.join('pretrained', 'abp_3min_hash.model')],
    [3, os.path.join('pretrained', 'ecg_3min_hash.model')],
    [3, os.path.join('pretrained', 'eeg_3min_hash.model')],
    [3, os.path.join('pretrained', 'abp_ecg_3min_hash.model')],
    [3, os.path.join('pretrained', 'abp_eeg_3min_hash.model')],
    [3, os.path.join('pretrained', 'ecg_eeg_3min_hash.model')],
    [3, os.path.join('pretrained', 'abp_ecg_eeg_3min_hash.model')],
    # 5-minute models
    [5, os.path.join('pretrained', 'abp_5min_hash.model')],
    [5, os.path.join('pretrained', 'ecg_5min_hash.model')],
    [5, os.path.join('pretrained', 'eeg_5min_hash.model')],
    [5, os.path.join('pretrained', 'abp_ecg_5min_hash.model')],
    [5, os.path.join('pretrained', 'abp_eeg_5min_hash.model')],
    [5, os.path.join('pretrained', 'ecg_eeg_5min_hash.model')],
    [5, os.path.join('pretrained', 'abp_ecg_eeg_5min_hash.model')],
    # 10-minute models
    [10, os.path.join('pretrained', 'abp_10min_hash.model')],
    [10, os.path.join('pretrained', 'ecg_10min_hash.model')],
    [10, os.path.join('pretrained', 'eeg_10min_hash.model')],
    [10, os.path.join('pretrained', 'abp_ecg_10min_hash.model')],
    [10, os.path.join('pretrained', 'abp_eeg_10min_hash.model')],
    [10, os.path.join('pretrained', 'ecg_eeg_10min_hash.model')],
    [10, os.path.join('pretrained', 'abp_ecg_eeg_10min_hash.model')],
    # 15-minute models
    [15, os.path.join('pretrained', 'abp_15min_hash.model')],
    [15, os.path.join('pretrained', 'ecg_15min_hash.model')],
    [15, os.path.join('pretrained', 'eeg_15min_hash.model')],
    [15, os.path.join('pretrained', 'abp_ecg_15min_hash.model')],
    [15, os.path.join('pretrained', 'abp_eeg_15min_hash.model')],
    [15, os.path.join('pretrained', 'ecg_eeg_15min_hash.model')],
    [15, os.path.join('pretrained', 'abp_ecg_eeg_15min_hash.model')],
]

if ENABLE_VALIDATION:
    test_loader = torch.utils.data.DataLoader(test_dataset)
    loss_func = nn.BCELoss()
    for pred_window, model_path in validate_models:
        if pred_window == PREDICTION_WINDOW:
            # map_location lets CPU-only machines load GPU-trained checkpoints
            model = torch.load(model_path, map_location=device)
            eval_model(model, device, test_loader, loss_func, print_detailed=False)

Results¶

Once our experiments are complete, we will build tables comparing a set of measures across all experiments performed. The full set of experiments and measures is listed below.

Results from Final Rubric¶

  • Table of results (no need to include additional experiments, but main reproducibility result should be included)
  • All claims should be supported by experiment results
  • Discuss with respect to the hypothesis and results from the original paper
  • Experiments beyond the original paper
    • Each experiment should include results and a discussion
  • Ablation Study.

Experiments¶

  • ABP only
  • ECG only
  • EEG only
  • ABP + ECG
  • ABP + EEG
  • ECG + EEG
  • ABP + ECG + EEG

Note: each experiment will be repeated with the following time-to-IOH-event durations:

  • 3 minutes
  • 5 minutes
  • 10 minutes
  • 15 minutes

Note: the above list of experiments will be performed if there is sufficient time and GPU capability to complete them before the submission deadline. Should we experience constraints on this front, we will reduce our experimental coverage to the following 4 core experiments, which are necessary to evaluate the hypotheses stated at the head of this report:

  • ABP only @ 3 minutes
  • ABP + ECG @ 3 minutes
  • ABP + EEG @ 3 minutes
  • ABP + ECG + EEG @ 3 minutes

For additional details please review the "Planned Actions" in the Discussion section of this report.

Measures¶

  • AUROC
  • AUPRC
  • Sensitivity
  • Specificity
  • Threshold
  • Loss Shrinkage

[ TODO for final report - collect data for all measures listed above. ]

[ TODO for final report - generate ROC and PRC plots for each experiment ]

We are collecting a broad set of measures across each experiment in order to perform a comprehensive comparison of all measures listed across all comparable experiments executed in the original paper. However, our key experimental results will be focused on a subset of these results that address the main experiments defined at the beginning of this notebook.

The key experimental result measures will be as follows:

  • For 3 minutes ahead of the predicted IOH event:
    • compare AUROC and AUPRC for ABP only vs ABP+ECG
    • compare AUROC and AUPRC for ABP only vs ABP+EEG
    • compare AUROC and AUPRC for ABP only vs ABP+ECG+EEG
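
To pin down what these comparisons mean, the key measures can be computed from a list of labels and model scores. The sketch below is illustrative only (it is not the evaluation code used in our experiments): `auroc` uses the rank-based (Mann-Whitney) formulation of the area under the ROC curve, and `sensitivity_specificity` evaluates a fixed decision threshold.

```python
def auroc(labels, scores):
    """AUROC as the probability that a random positive outscores a
    random negative (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sensitivity_specificity(labels, scores, threshold):
    """Sensitivity (recall on positives) and specificity (recall on
    negatives) for a given score threshold."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)
```

For the actual experiments, library implementations (e.g. scikit-learn's `roc_auc_score`) would typically be used; the hand-rolled version above only fixes the definitions being compared.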

Computational requirements¶

  • Report at least 3 types of requirements, such as type of hardware, average runtime per epoch, total number of trials, GPU hours used, and number of training epochs

Model comparison¶

The following table is Table 3 from the original paper which presents the measured values for each signal combination across each of the four temporal predictive categories:

Area under the Receiver-operating Characteristic Curve, Area under the Precision-Recall Curve, Sensitivity, and Specificity of the model in predicting intraoperative hypotension

We have not yet completed the experiments necessary to determine our reproduced model performance, and therefore cannot yet determine whether our results accurately represent those of the original paper. These details are expected to be included in the final report.

As of the draft submission, the reported evaluation measures of our model are too good to be true (all measures are 1.0). We suspect that there is data leakage in the dataset splitting process and will address this in time for the final report.
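
One cheap diagnostic for this kind of leakage is to verify that no case ID is shared between splits. A minimal sketch (the `check_split_leakage` helper and its argument names are ours, not from the original paper):

```python
def check_split_leakage(train_ids, val_ids, test_ids):
    """Return the set of case IDs that appear in more than one split.
    An empty set means the splits are disjoint at the case level."""
    train, val, test = set(train_ids), set(val_ids), set(test_ids)
    return (train & val) | (train & test) | (val & test)
```

A non-empty result would confirm the suspected leakage.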

Discussion¶

Discussion (10) from Final Rubric¶

  • Implications of the experimental results, whether the original paper was reproducible, and if it wasn’t, what factors made it irreproducible
  • “What was easy”
  • “What was difficult”
  • Recommendations to the original authors or others who work in this area for improving reproducibility
  • (specific to our group) "I have communicated with Maciej during OH. The draft looks good and I would expect some explanations/analysis on the final report on why you get 1.0 as AUROC."
    • Discuss our bug: we believed we were sampling dozens of different patient cases, but were in fact training the model on the same segments extracted from a single patient case over and over. We therefore massively overfit the training data for one patient, then unwittingly used that same patient's data for validation and testing, yielding perfect classification during inference.
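
The remedy for this class of bug is to split at the case (patient) level before extracting segments, so that every segment inherits its case's split. A minimal sketch of such a group-wise split (field names like `case_id` and the split fractions are illustrative, not our exact pipeline):

```python
import random

def split_by_case(segments, val_frac=0.15, test_frac=0.15, seed=42):
    """Assign whole cases to train/val/test, then route every segment
    to its case's split, so no case can leak across splits."""
    case_ids = sorted({seg['case_id'] for seg in segments})
    rng = random.Random(seed)
    rng.shuffle(case_ids)
    n = len(case_ids)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    assignment = {}
    for i, cid in enumerate(case_ids):
        assignment[cid] = ('test' if i < n_test
                           else 'val' if i < n_test + n_val
                           else 'train')
    splits = {'train': [], 'val': [], 'test': []}
    for seg in segments:
        splits[assignment[seg['case_id']]].append(seg)
    return splits
```

Because segments are routed by case, no patient's waveforms can appear in both the training and evaluation sets.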

Feasibility of reproduction¶

Our assessment is that this paper will be reproducible. The outstanding risk is that each experiment can take up to 7 hours to run on hardware available to the team (i.e., ~7 hours to run ~70 epochs on a desktop with an AMD Ryzen 7 3800X 8-core CPU, an RTX 2070 SUPER GPU, and 32GB RAM). There are a total of 28 experiments (7 combinations of signal inputs, each at 4 time horizons). Should our team find it impossible to complete all of the experiments represented in Table 3 of our selected paper, we will reduce the number of experiments to focus solely on the ones directly related to our hypotheses described at the beginning of this notebook (i.e., reduce the combinations of interest to 4: ABP alone, ABP+EEG, ABP+ECG, ABP+ECG+EEG). This would result in a new total of 16 experiments to run.

Planned ablations¶

Our proposal included a collection of potential ablations to be investigated:

  • Remove ResNet skip connection
  • Reduce # of residual blocks from 12 to 6
  • Reduce # of residual blocks from 12 to 1
  • Eliminate dropout from residual block
  • Max pooling configuration
    • smaller size/stride
    • eliminate max pooling

Given the amount of time required to conduct each experiment, our team intends to choose only a small number of ablations from this set. Further, we only intend to perform ablation analysis against the best performing signal combination and time horizon from the reproduction experiments. In other words, we intend to perform ablation analysis against the following training combinations, and only against the models trained with data measured 3 minutes prior to an IOH event:

  • ABP alone
  • ABP + ECG
  • ABP + EEG
  • ABP + ECG + EEG

Time and GPU resource permitting, we will complete a broader range of experiments. For additional details, please see the section below titled "Plans for next phase".

Nature of reproduced results¶

Our team intends to address how our experimental results align with the published results in the final submission of this report. There was not sufficient time during preparation of the Draft notebook to complete model training and result analysis for a large number of experiments.

What was easy? What was difficult?¶

The most difficult aspect of preparing this draft was the data preprocessing.

  • First, the source data is unlabelled, so our team was responsible for implementing analysis methods for identifying positive (IOH event occurred) and negative (no IOH event) samples by running a lookahead analysis over our input training set.
  • Second, the volume of raw data is in excess of 90GB. A non-trivial amount of compute was required to minify the input data to only include the data tracks of interest to our experiments (i.e., ABP, ECG, and EEG tracks).
  • Third, our team found it difficult to trace back to the definition of the jSQI signal quality index referenced in the paper. Multiple references across multiple papers had to be traversed to understand which variant of the quality index was used.
    • The only available source code related to the signal quality index referenced by our paper is in [5]. Source code was not directly linked from the paper, but the GitHub repository of the corresponding author of reference [5] led to MATLAB source code for the signal quality index described in that paper. That code is available here: https://github.com/cliffordlab/PhysioNet-Cardiovascular-Signal-Toolbox/tree/master/Tools/BP_Tools
    • Our team had insufficient time to port this signal quality index to Python for use in our investigation, or to set up a MATLAB environment in which to assess our source data using the above MATLAB functions, but we expect to complete this as part of our final report.
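
For the first point (labeling), the lookahead analysis can be sketched as follows. The 65 mmHg threshold comes from the paper's IOH definition; the sustained-duration parameter and the function shape are illustrative assumptions, not our exact implementation:

```python
HYPOTENSION_MAP_MMHG = 65.0

def label_segment(map_series_mmhg, sample_rate_hz, sustain_sec=60):
    """Label a lookahead window positive (1) if mean arterial pressure
    stays below 65 mmHg for at least `sustain_sec` seconds anywhere in
    the window, else negative (0). Sustain duration is an assumption."""
    needed = int(sustain_sec * sample_rate_hz)  # consecutive sub-threshold samples required
    run = 0
    for v in map_series_mmhg:
        run = run + 1 if v < HYPOTENSION_MAP_MMHG else 0
        if run >= needed:
            return 1
    return 0
```

For example, with a 1 Hz MAP series and a 3-second sustain requirement, `[70, 60, 60, 60, 70]` is labeled positive while `[70, 60, 70, 60, 70]` is not, since the latter never stays below threshold long enough.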

Suggestions to paper author¶

The most notable suggestion would be to correct the hyperparameters published in Supplemental Table 1. Specifically, the output size for residual blocks 11 and 12 for the ECG and ABP data sets is listed as 496x6. This is a typo and should read 469x6. The typo became apparent when tracing the downsampling operation within Residual Block 11 and recognizing that the tensor dimensions were misaligned.

Additionally, more explicit references to the signal quality index assessment tools should be added. Our team could not find a link to the MATLAB source code described in reference [5], and had to manually discover the GitHub profile of the lab of the corresponding author of reference [5] in order to find MATLAB source corresponding to the metrics described therein.

Plans for next phase¶

Our team plans to accomplish the following goals in service of preparing the Final Report:

  • Implement the jSQI filter to remove any training data with aberrant signal quality, per the threshold defined in our original paper.
  • Execute the following experiments:
    • Measure predictive quality of the model trained solely with ABP data at 3 minutes prior to IOH events.
    • Measure predictive quality of the model trained with ABP+ECG data at 3 minutes prior to IOH events.
    • Measure predictive quality of the model trained with ABP+EEG data at 3 minutes prior to IOH events.
    • Measure predictive quality of the model trained with ABP+ECG+EEG data at 3 minutes prior to IOH events.
  • Gather our measures for these experiments, compare them against the published results from our selected paper, and determine whether we are successfully reproducing the results outlined in the paper.
  • Ablation analysis:
    • Execute the following ablation experiments:
      • Repeat the four experiments described above while reducing the number of residual blocks in the model from 12 to 6.
  • Time- and/or GPU-resource permitting, we will complete the remaining 24 experiments as described in the paper:
    • Measure predictive quality of the model trained solely with ABP data at 5, 10, and 15 minutes prior to IOH events.
    • Measure predictive quality of the model trained with ABP+ECG data at 5, 10, and 15 minutes prior to IOH events.
    • Measure predictive quality of the model trained with ABP+EEG data at 5, 10, and 15 minutes prior to IOH events.
    • Measure predictive quality of the model trained with ABP+ECG+EEG data at 5, 10, and 15 minutes prior to IOH events.
    • Measure predictive quality of the model trained solely with ECG data at 3, 5, 10, and 15 minutes prior to IOH events.
    • Measure predictive quality of the model trained solely with EEG data at 3, 5, 10, and 15 minutes prior to IOH events.
    • Measure predictive quality of the model trained with ECG+EEG data at 3, 5, 10, and 15 minutes prior to IOH events.
    • Additional ablation experiments:
      • For the four core experiments (ABP, ABP+ECG, ABP+EEG, ABP+ECG+EEG each trained on event data occurring 3 minutes prior to IOH events), perform the following ablations:
        • Repeat experiment while eliminating dropout from every residual block
        • Repeat experiment while removing the skip connection from every residual block
        • Repeat the four experiments described above while reducing the number of residual blocks in the model from 12 to 1.

References¶

  1. Jo Y-Y, Jang J-H, Kwon J-m, Lee H-C, Jung C-W, Byun S, et al. “Predicting intraoperative hypotension using deep learning with waveforms of arterial blood pressure, electroencephalogram, and electrocardiogram: Retrospective study.” PLoS ONE, (2022) 17(8): e0272055 https://doi.org/10.1371/journal.pone.0272055
  2. Hatib F, Jian Z, Buddi S, Lee C, Settels J, Sibert K, Rinehart J, Cannesson M. “Machine-learning Algorithm to Predict Hypotension Based on High-fidelity Arterial Pressure Waveform Analysis.” Anesthesiology (2018) 129:4 https://doi.org/10.1097/ALN.0000000000002300
  3. Bao, X., Kumar, S.S., Shah, N.J. et al. "AcumenTM hypotension prediction index guidance for prevention and treatment of hypotension in noncardiac surgery: a prospective, single-arm, multicenter trial." Perioperative Medicine (2024) 13:13 https://doi.org/10.1186/s13741-024-00369-9
  4. Lee, HC., Park, Y., Yoon, S.B. et al. VitalDB, a high-fidelity multi-parameter vital signs database in surgical patients. Sci Data 9, 279 (2022). https://doi.org/10.1038/s41597-022-01411-5
  5. Li Q., Mark R.G. & Clifford G.D. "Artificial arterial blood pressure artifact models and an evaluation of a robust blood pressure and heart rate estimator." BioMed Eng OnLine. (2009) 8:13. pmid:19586547 https://doi.org/10.1186/1475-925X-8-13
  6. Park H-J, "VitalDB Python Example Notebooks" GitHub Repository https://github.com/vitaldb/examples/blob/master/hypotension_art.ipynb

Public GitHub Repo (5)¶

  • Publish your code in a public repository on GitHub and attach the URL in the notebook.
  • Make sure your code is documented properly.
    • A README.md file describing the exact steps to run your code is required.
    • Check “ML Code Completeness Checklist” (https://github.com/paperswithcode/releasing-research-code)
    • Check “Best Practices for Reproducibility” (https://www.cs.mcgill.ca/~ksinha4/practices_for_reproducibility/)

Video Presentation (Requirements from Rubric)¶

Walkthrough of the notebook; no need to make slides. We expect a well-timed, well-presented presentation. You should clearly explain what the original paper is about (the general problem, the specific approach taken, and the claimed results) and what you encountered when you attempted to reproduce the results. Use the time allotted, neither too much nor too little.

  • <= 4 mins
  • Explain the general problem clearly
  • Explain the specific approach taken in the paper clearly
  • Explain reproduction attempts clearly
In [110]:
time_delta = np.round(timer() - global_time_start, 3)
print(f'Total Notebook Processing Time: {time_delta:.4f} sec')
Total Notebook Processing Time: 29604.0460 sec